Test Report: Docker_Linux_crio_arm64 22427

f815509b9ccb41a33be05aa7241c338e7909bf25:2026-01-10:43184

Test fail (27/332)

TestAddons/serial/Volcano (0.74s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable volcano --alsologtostderr -v=1: exit status 11 (741.920328ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 09:15:21.338321  316699 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:15:21.339769  316699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:21.339789  316699 out.go:374] Setting ErrFile to fd 2...
	I0110 09:15:21.339797  316699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:21.340109  316699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:15:21.340441  316699 mustload.go:66] Loading cluster: addons-502860
	I0110 09:15:21.340852  316699 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:21.340879  316699 addons.go:622] checking whether the cluster is paused
	I0110 09:15:21.340991  316699 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:21.341007  316699 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:15:21.341505  316699 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:15:21.358656  316699 ssh_runner.go:195] Run: systemctl --version
	I0110 09:15:21.358716  316699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:15:21.377630  316699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:15:21.479438  316699 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:15:21.479525  316699 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:15:21.510782  316699 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:15:21.510825  316699 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:15:21.510830  316699 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:15:21.510834  316699 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:15:21.510838  316699 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:15:21.510842  316699 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:15:21.510845  316699 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:15:21.510866  316699 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:15:21.510875  316699 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:15:21.510887  316699 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:15:21.510914  316699 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:15:21.510924  316699 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:15:21.510940  316699 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:15:21.510950  316699 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:15:21.510954  316699 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:15:21.510964  316699 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:15:21.510982  316699 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:15:21.510994  316699 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:15:21.510997  316699 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:15:21.511000  316699 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:15:21.511005  316699 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:15:21.511009  316699 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:15:21.511012  316699 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:15:21.511015  316699 cri.go:96] found id: ""
	I0110 09:15:21.511091  316699 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:15:21.526040  316699 out.go:203] 
	W0110 09:15:21.529103  316699 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:15:21.529148  316699 out.go:285] * 
	* 
	W0110 09:15:21.991000  316699 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:15:21.994169  316699 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.74s)

TestAddons/parallel/Registry (15.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 5.258196ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-7m2mc" [ca5b57f5-f785-4af1-9f36-3adbaea3fd71] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003924453s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-gzwbd" [7cd96d4c-f796-4c02-b744-e2ed1d51cb2e] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003678516s
addons_test.go:394: (dbg) Run:  kubectl --context addons-502860 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-502860 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-502860 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.321614514s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable registry --alsologtostderr -v=1: exit status 11 (290.775274ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 09:15:48.931234  317667 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:15:48.931961  317667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:48.931974  317667 out.go:374] Setting ErrFile to fd 2...
	I0110 09:15:48.931981  317667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:48.932241  317667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:15:48.932567  317667 mustload.go:66] Loading cluster: addons-502860
	I0110 09:15:48.932950  317667 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:48.932974  317667 addons.go:622] checking whether the cluster is paused
	I0110 09:15:48.933080  317667 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:48.933095  317667 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:15:48.933606  317667 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:15:48.951465  317667 ssh_runner.go:195] Run: systemctl --version
	I0110 09:15:48.951534  317667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:15:48.976480  317667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:15:49.088103  317667 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:15:49.088207  317667 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:15:49.128066  317667 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:15:49.128084  317667 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:15:49.128089  317667 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:15:49.128093  317667 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:15:49.128096  317667 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:15:49.128100  317667 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:15:49.128103  317667 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:15:49.128106  317667 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:15:49.128109  317667 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:15:49.128116  317667 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:15:49.128119  317667 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:15:49.128122  317667 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:15:49.128125  317667 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:15:49.128128  317667 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:15:49.128131  317667 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:15:49.128136  317667 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:15:49.128140  317667 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:15:49.128144  317667 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:15:49.128147  317667 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:15:49.128150  317667 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:15:49.128155  317667 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:15:49.128158  317667 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:15:49.128161  317667 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:15:49.128164  317667 cri.go:96] found id: ""
	I0110 09:15:49.128213  317667 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:15:49.152611  317667 out.go:203] 
	W0110 09:15:49.157233  317667 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:15:49.157257  317667 out.go:285] * 
	* 
	W0110 09:15:49.160737  317667 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:15:49.163807  317667 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.91s)

TestAddons/parallel/RegistryCreds (0.46s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.557068ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-502860
addons_test.go:334: (dbg) Run:  kubectl --context addons-502860 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (250.455592ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 09:16:18.765409  319204 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:16:18.766255  319204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:18.766269  319204 out.go:374] Setting ErrFile to fd 2...
	I0110 09:16:18.766274  319204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:18.766559  319204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:16:18.766836  319204 mustload.go:66] Loading cluster: addons-502860
	I0110 09:16:18.767206  319204 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:18.767228  319204 addons.go:622] checking whether the cluster is paused
	I0110 09:16:18.767334  319204 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:18.767349  319204 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:16:18.767860  319204 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:16:18.784580  319204 ssh_runner.go:195] Run: systemctl --version
	I0110 09:16:18.784645  319204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:16:18.800995  319204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:16:18.904525  319204 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:16:18.904612  319204 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:16:18.938610  319204 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:16:18.938631  319204 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:16:18.938636  319204 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:16:18.938640  319204 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:16:18.938644  319204 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:16:18.938647  319204 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:16:18.938650  319204 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:16:18.938653  319204 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:16:18.938661  319204 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:16:18.938671  319204 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:16:18.938674  319204 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:16:18.938677  319204 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:16:18.938684  319204 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:16:18.938687  319204 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:16:18.938690  319204 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:16:18.938695  319204 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:16:18.938698  319204 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:16:18.938702  319204 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:16:18.938705  319204 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:16:18.938708  319204 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:16:18.938712  319204 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:16:18.938715  319204 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:16:18.938718  319204 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:16:18.938721  319204 cri.go:96] found id: ""
	I0110 09:16:18.938771  319204 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:16:18.953721  319204 out.go:203] 
	W0110 09:16:18.956582  319204 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:16:18.956602  319204 out.go:285] * 
	* 
	W0110 09:16:18.959804  319204 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:16:18.962673  319204 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.46s)

TestAddons/parallel/Ingress (12.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-502860 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-502860 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-502860 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [19d14bf8-7418-49c6-b81a-e9a397a656f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [19d14bf8-7418-49c6-b81a-e9a397a656f6] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002878776s
I0110 09:16:11.539421  309898 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-502860 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (292.928061ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 09:16:12.605067  318864 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:16:12.606831  318864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:12.606901  318864 out.go:374] Setting ErrFile to fd 2...
	I0110 09:16:12.606933  318864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:12.607370  318864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:16:12.607817  318864 mustload.go:66] Loading cluster: addons-502860
	I0110 09:16:12.608354  318864 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:12.608415  318864 addons.go:622] checking whether the cluster is paused
	I0110 09:16:12.608651  318864 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:12.608723  318864 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:16:12.609360  318864 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:16:12.628455  318864 ssh_runner.go:195] Run: systemctl --version
	I0110 09:16:12.628561  318864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:16:12.655159  318864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:16:12.772853  318864 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:16:12.772931  318864 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:16:12.820589  318864 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:16:12.820609  318864 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:16:12.820615  318864 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:16:12.820618  318864 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:16:12.820635  318864 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:16:12.820641  318864 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:16:12.820645  318864 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:16:12.820648  318864 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:16:12.820661  318864 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:16:12.820670  318864 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:16:12.820674  318864 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:16:12.820686  318864 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:16:12.820689  318864 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:16:12.820692  318864 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:16:12.820695  318864 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:16:12.820702  318864 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:16:12.820705  318864 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:16:12.820709  318864 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:16:12.820712  318864 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:16:12.820715  318864 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:16:12.820720  318864 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:16:12.820726  318864 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:16:12.820729  318864 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:16:12.820734  318864 cri.go:96] found id: ""
	I0110 09:16:12.820784  318864 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:16:12.839138  318864 out.go:203] 
	W0110 09:16:12.843048  318864 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:16:12.843087  318864 out.go:285] * 
	* 
	W0110 09:16:12.846326  318864 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:16:12.849551  318864 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable ingress --alsologtostderr -v=1: exit status 11 (347.762659ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 09:16:12.922988  318921 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:16:12.923728  318921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:12.923763  318921 out.go:374] Setting ErrFile to fd 2...
	I0110 09:16:12.923783  318921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:12.924090  318921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:16:12.924445  318921 mustload.go:66] Loading cluster: addons-502860
	I0110 09:16:12.924918  318921 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:12.924966  318921 addons.go:622] checking whether the cluster is paused
	I0110 09:16:12.925114  318921 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:12.925145  318921 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:16:12.925692  318921 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:16:12.960049  318921 ssh_runner.go:195] Run: systemctl --version
	I0110 09:16:12.960117  318921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:16:12.981333  318921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:16:13.105976  318921 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:16:13.106066  318921 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:16:13.160320  318921 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:16:13.160347  318921 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:16:13.160352  318921 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:16:13.160356  318921 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:16:13.160360  318921 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:16:13.160367  318921 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:16:13.160371  318921 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:16:13.160374  318921 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:16:13.160378  318921 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:16:13.160388  318921 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:16:13.160392  318921 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:16:13.160395  318921 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:16:13.160398  318921 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:16:13.160405  318921 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:16:13.160408  318921 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:16:13.160420  318921 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:16:13.160423  318921 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:16:13.160427  318921 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:16:13.160430  318921 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:16:13.160434  318921 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:16:13.160438  318921 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:16:13.160441  318921 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:16:13.160444  318921 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:16:13.160447  318921 cri.go:96] found id: ""
	I0110 09:16:13.160533  318921 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:16:13.182946  318921 out.go:203] 
	W0110 09:16:13.186667  318921 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:16:13.186698  318921 out.go:285] * 
	* 
	W0110 09:16:13.190297  318921 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:16:13.195089  318921 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (12.33s)

TestAddons/parallel/InspektorGadget (5.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-jswfc" [a349505b-e38c-4b7d-9281-9ec5e7c4309e] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006576427s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (292.594695ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 09:16:18.286247  319153 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:16:18.290431  319153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:18.290494  319153 out.go:374] Setting ErrFile to fd 2...
	I0110 09:16:18.290515  319153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:18.290867  319153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:16:18.291215  319153 mustload.go:66] Loading cluster: addons-502860
	I0110 09:16:18.291705  319153 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:18.291767  319153 addons.go:622] checking whether the cluster is paused
	I0110 09:16:18.291948  319153 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:18.292004  319153 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:16:18.292695  319153 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:16:18.318754  319153 ssh_runner.go:195] Run: systemctl --version
	I0110 09:16:18.318817  319153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:16:18.337033  319153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:16:18.439399  319153 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:16:18.439491  319153 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:16:18.477489  319153 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:16:18.477512  319153 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:16:18.477517  319153 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:16:18.477521  319153 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:16:18.477524  319153 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:16:18.477528  319153 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:16:18.477530  319153 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:16:18.477533  319153 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:16:18.477536  319153 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:16:18.477542  319153 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:16:18.477545  319153 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:16:18.477553  319153 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:16:18.477557  319153 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:16:18.477560  319153 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:16:18.477564  319153 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:16:18.477569  319153 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:16:18.477572  319153 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:16:18.477576  319153 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:16:18.477579  319153 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:16:18.477582  319153 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:16:18.477586  319153 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:16:18.477590  319153 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:16:18.477593  319153 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:16:18.477596  319153 cri.go:96] found id: ""
	I0110 09:16:18.477644  319153 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:16:18.493405  319153 out.go:203] 
	W0110 09:16:18.496360  319153 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:16:18.496401  319153 out.go:285] * 
	* 
	W0110 09:16:18.499658  319153 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:16:18.502833  319153 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.30s)

TestAddons/parallel/MetricsServer (6.44s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 10.677403ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-4v9dz" [c45c3fc8-40f0-4cc9-911a-8d9f4dd14ac1] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005125464s
addons_test.go:465: (dbg) Run:  kubectl --context addons-502860 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (312.033751ms)

-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:16:00.617109  318189 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:16:00.618064  318189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:00.618081  318189 out.go:374] Setting ErrFile to fd 2...
	I0110 09:16:00.618087  318189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:00.618355  318189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:16:00.618865  318189 mustload.go:66] Loading cluster: addons-502860
	I0110 09:16:00.619423  318189 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:00.619486  318189 addons.go:622] checking whether the cluster is paused
	I0110 09:16:00.619720  318189 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:00.619735  318189 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:16:00.620305  318189 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:16:00.645870  318189 ssh_runner.go:195] Run: systemctl --version
	I0110 09:16:00.645936  318189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:16:00.666499  318189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:16:00.787340  318189 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:16:00.787436  318189 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:16:00.828666  318189 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:16:00.828691  318189 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:16:00.828697  318189 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:16:00.828706  318189 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:16:00.828710  318189 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:16:00.828713  318189 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:16:00.828717  318189 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:16:00.828720  318189 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:16:00.828723  318189 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:16:00.828730  318189 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:16:00.828734  318189 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:16:00.828737  318189 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:16:00.828741  318189 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:16:00.828745  318189 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:16:00.828756  318189 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:16:00.828765  318189 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:16:00.828769  318189 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:16:00.828774  318189 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:16:00.828777  318189 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:16:00.828780  318189 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:16:00.828785  318189 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:16:00.828788  318189 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:16:00.828791  318189 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:16:00.828794  318189 cri.go:96] found id: ""
	I0110 09:16:00.828847  318189 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:16:00.852561  318189 out.go:203] 
	W0110 09:16:00.855535  318189 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:16:00.855563  318189 out.go:285] * 
	* 
	W0110 09:16:00.858932  318189 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:16:00.862005  318189 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.44s)

                                                
                                    
TestAddons/parallel/CSI (31.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0110 09:15:58.596895  309898 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0110 09:15:58.602945  309898 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0110 09:15:58.602970  309898 kapi.go:107] duration metric: took 6.090879ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.101431ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-502860 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-502860 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [12b98569-c1bd-4011-91d2-17e5cd6a7451] Pending
helpers_test.go:353: "task-pv-pod" [12b98569-c1bd-4011-91d2-17e5cd6a7451] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.004623157s
addons_test.go:574: (dbg) Run:  kubectl --context addons-502860 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-502860 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-502860 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-502860 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-502860 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-502860 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-502860 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [c26ad460-04f4-49d6-a3ec-ba91e11a32f3] Pending
helpers_test.go:353: "task-pv-pod-restore" [c26ad460-04f4-49d6-a3ec-ba91e11a32f3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [c26ad460-04f4-49d6-a3ec-ba91e11a32f3] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003270541s
addons_test.go:616: (dbg) Run:  kubectl --context addons-502860 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-502860 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-502860 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (260.066341ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:16:29.544201  319404 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:16:29.545071  319404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:29.545086  319404 out.go:374] Setting ErrFile to fd 2...
	I0110 09:16:29.545092  319404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:29.545360  319404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:16:29.545662  319404 mustload.go:66] Loading cluster: addons-502860
	I0110 09:16:29.546037  319404 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:29.546060  319404 addons.go:622] checking whether the cluster is paused
	I0110 09:16:29.546180  319404 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:29.546194  319404 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:16:29.546764  319404 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:16:29.564568  319404 ssh_runner.go:195] Run: systemctl --version
	I0110 09:16:29.564645  319404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:16:29.583475  319404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:16:29.691170  319404 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:16:29.691259  319404 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:16:29.720852  319404 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:16:29.720875  319404 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:16:29.720881  319404 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:16:29.720885  319404 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:16:29.720888  319404 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:16:29.720892  319404 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:16:29.720897  319404 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:16:29.720908  319404 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:16:29.720912  319404 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:16:29.720922  319404 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:16:29.720926  319404 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:16:29.720929  319404 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:16:29.720931  319404 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:16:29.720935  319404 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:16:29.720943  319404 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:16:29.720948  319404 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:16:29.720951  319404 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:16:29.720954  319404 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:16:29.720957  319404 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:16:29.720960  319404 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:16:29.720965  319404 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:16:29.720968  319404 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:16:29.720971  319404 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:16:29.720973  319404 cri.go:96] found id: ""
	I0110 09:16:29.721023  319404 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:16:29.736216  319404 out.go:203] 
	W0110 09:16:29.739071  319404 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:16:29.739103  319404 out.go:285] * 
	* 
	W0110 09:16:29.742347  319404 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:16:29.745393  319404 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (333.056214ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:16:29.806143  319446 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:16:29.807012  319446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:29.807033  319446 out.go:374] Setting ErrFile to fd 2...
	I0110 09:16:29.807039  319446 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:16:29.807328  319446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:16:29.807648  319446 mustload.go:66] Loading cluster: addons-502860
	I0110 09:16:29.808038  319446 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:29.808061  319446 addons.go:622] checking whether the cluster is paused
	I0110 09:16:29.808170  319446 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:16:29.808184  319446 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:16:29.808786  319446 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:16:29.827792  319446 ssh_runner.go:195] Run: systemctl --version
	I0110 09:16:29.827847  319446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:16:29.845270  319446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:16:29.947110  319446 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:16:29.947257  319446 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:16:29.995812  319446 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:16:29.995844  319446 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:16:29.995852  319446 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:16:29.995856  319446 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:16:29.995859  319446 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:16:29.995863  319446 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:16:29.995867  319446 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:16:29.995881  319446 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:16:29.995887  319446 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:16:29.995894  319446 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:16:29.995902  319446 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:16:29.995906  319446 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:16:29.995909  319446 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:16:29.995912  319446 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:16:29.995929  319446 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:16:29.995935  319446 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:16:29.995939  319446 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:16:29.995945  319446 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:16:29.995948  319446 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:16:29.995952  319446 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:16:29.995957  319446 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:16:29.995961  319446 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:16:29.995964  319446 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:16:29.995967  319446 cri.go:96] found id: ""
	I0110 09:16:29.996042  319446 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:16:30.051267  319446 out.go:203] 
	W0110 09:16:30.054280  319446 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:16:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:16:30.054318  319446 out.go:285] * 
	* 
	W0110 09:16:30.068349  319446 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:16:30.072768  319446 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (31.49s)

                                                
                                    
TestAddons/parallel/Headlamp (3.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-502860 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-502860 --alsologtostderr -v=1: exit status 11 (277.695194ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:15:33.324462  316917 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:15:33.325294  316917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:33.325314  316917 out.go:374] Setting ErrFile to fd 2...
	I0110 09:15:33.325321  316917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:33.325699  316917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:15:33.326099  316917 mustload.go:66] Loading cluster: addons-502860
	I0110 09:15:33.326565  316917 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:33.326594  316917 addons.go:622] checking whether the cluster is paused
	I0110 09:15:33.326749  316917 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:33.326768  316917 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:15:33.327382  316917 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:15:33.345911  316917 ssh_runner.go:195] Run: systemctl --version
	I0110 09:15:33.345979  316917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:15:33.365097  316917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:15:33.471231  316917 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:15:33.471315  316917 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:15:33.504582  316917 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:15:33.504606  316917 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:15:33.504612  316917 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:15:33.504615  316917 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:15:33.504619  316917 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:15:33.504622  316917 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:15:33.504625  316917 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:15:33.504628  316917 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:15:33.504631  316917 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:15:33.504636  316917 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:15:33.504640  316917 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:15:33.504643  316917 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:15:33.504646  316917 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:15:33.504649  316917 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:15:33.504653  316917 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:15:33.504661  316917 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:15:33.504664  316917 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:15:33.504669  316917 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:15:33.504678  316917 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:15:33.504681  316917 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:15:33.504686  316917 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:15:33.504691  316917 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:15:33.504694  316917 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:15:33.504697  316917 cri.go:96] found id: ""
	I0110 09:15:33.504748  316917 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:15:33.525804  316917 out.go:203] 
	W0110 09:15:33.528762  316917 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:15:33.528786  316917 out.go:285] * 
	* 
	W0110 09:15:33.532034  316917 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:15:33.535257  316917 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-502860 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-502860
helpers_test.go:244: (dbg) docker inspect addons-502860:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "564c954ea5dee0ff837b415a9231ea656fab82784badb44757f0f6497f36bb1f",
	        "Created": "2026-01-10T09:13:38.213384225Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311057,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T09:13:38.298061523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/564c954ea5dee0ff837b415a9231ea656fab82784badb44757f0f6497f36bb1f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/564c954ea5dee0ff837b415a9231ea656fab82784badb44757f0f6497f36bb1f/hostname",
	        "HostsPath": "/var/lib/docker/containers/564c954ea5dee0ff837b415a9231ea656fab82784badb44757f0f6497f36bb1f/hosts",
	        "LogPath": "/var/lib/docker/containers/564c954ea5dee0ff837b415a9231ea656fab82784badb44757f0f6497f36bb1f/564c954ea5dee0ff837b415a9231ea656fab82784badb44757f0f6497f36bb1f-json.log",
	        "Name": "/addons-502860",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-502860:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-502860",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "564c954ea5dee0ff837b415a9231ea656fab82784badb44757f0f6497f36bb1f",
	                "LowerDir": "/var/lib/docker/overlay2/942874c0b84a23c6d42fea168c50f22e88a2b59298060d233630f5a18f8d209c-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/942874c0b84a23c6d42fea168c50f22e88a2b59298060d233630f5a18f8d209c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/942874c0b84a23c6d42fea168c50f22e88a2b59298060d233630f5a18f8d209c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/942874c0b84a23c6d42fea168c50f22e88a2b59298060d233630f5a18f8d209c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-502860",
	                "Source": "/var/lib/docker/volumes/addons-502860/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-502860",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-502860",
	                "name.minikube.sigs.k8s.io": "addons-502860",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "514713b92f752e37d5969ecb732ea1fee9f19dc693dd8b63a44c78abd1ad68fa",
	            "SandboxKey": "/var/run/docker/netns/514713b92f75",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-502860": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:17:44:78:64:11",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c65c64e73f888e1b589fce745dd36cf583c0d78f3cac6169c4868926afd6dbb",
	                    "EndpointID": "e0f90e840597b4da375688e6c0410ab6c869f25c3b83318393efc772d697dbad",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-502860",
	                        "564c954ea5de"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-502860 -n addons-502860
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-502860 logs -n 25: (1.538726415s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-314745 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-314745   │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ delete  │ -p download-only-314745                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-314745   │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ start   │ -o=json --download-only -p download-only-343990 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-343990   │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ delete  │ -p download-only-343990                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-343990   │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ delete  │ -p download-only-314745                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-314745   │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ delete  │ -p download-only-343990                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-343990   │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ start   │ --download-only -p download-docker-672733 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-672733 │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │                     │
	│ delete  │ -p download-docker-672733                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-672733 │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ start   │ --download-only -p binary-mirror-661590 --alsologtostderr --binary-mirror http://127.0.0.1:44881 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-661590   │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │                     │
	│ delete  │ -p binary-mirror-661590                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-661590   │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ addons  │ disable dashboard -p addons-502860                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-502860          │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │                     │
	│ addons  │ enable dashboard -p addons-502860                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-502860          │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │                     │
	│ start   │ -p addons-502860 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-502860          │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:15 UTC │
	│ addons  │ addons-502860 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-502860          │ jenkins │ v1.37.0 │ 10 Jan 26 09:15 UTC │                     │
	│ addons  │ addons-502860 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-502860          │ jenkins │ v1.37.0 │ 10 Jan 26 09:15 UTC │                     │
	│ addons  │ enable headlamp -p addons-502860 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-502860          │ jenkins │ v1.37.0 │ 10 Jan 26 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 09:13:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 09:13:12.845640  310656 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:13:12.845810  310656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:13:12.845845  310656 out.go:374] Setting ErrFile to fd 2...
	I0110 09:13:12.845860  310656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:13:12.846118  310656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:13:12.846553  310656 out.go:368] Setting JSON to false
	I0110 09:13:12.847315  310656 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6942,"bootTime":1768029451,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:13:12.847383  310656 start.go:143] virtualization:  
	I0110 09:13:12.850691  310656 out.go:179] * [addons-502860] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:13:12.854498  310656 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:13:12.854602  310656 notify.go:221] Checking for updates...
	I0110 09:13:12.860172  310656 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:13:12.863033  310656 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:13:12.865858  310656 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:13:12.868637  310656 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:13:12.871467  310656 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:13:12.874557  310656 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:13:12.908569  310656 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:13:12.908684  310656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:13:12.962333  310656 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-10 09:13:12.953672122 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:13:12.962439  310656 docker.go:319] overlay module found
	I0110 09:13:12.965534  310656 out.go:179] * Using the docker driver based on user configuration
	I0110 09:13:12.968283  310656 start.go:309] selected driver: docker
	I0110 09:13:12.968297  310656 start.go:928] validating driver "docker" against <nil>
	I0110 09:13:12.968311  310656 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:13:12.969060  310656 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:13:13.022993  310656 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-10 09:13:13.014042712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:13:13.023160  310656 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 09:13:13.023403  310656 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 09:13:13.026241  310656 out.go:179] * Using Docker driver with root privileges
	I0110 09:13:13.029048  310656 cni.go:84] Creating CNI manager for ""
	I0110 09:13:13.029116  310656 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:13:13.029130  310656 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 09:13:13.029212  310656 start.go:353] cluster config:
	{Name:addons-502860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-502860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:13:13.032584  310656 out.go:179] * Starting "addons-502860" primary control-plane node in "addons-502860" cluster
	I0110 09:13:13.035456  310656 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 09:13:13.038547  310656 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:13:13.041439  310656 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:13:13.041493  310656 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 09:13:13.041508  310656 cache.go:65] Caching tarball of preloaded images
	I0110 09:13:13.041507  310656 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 09:13:13.041618  310656 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 09:13:13.041628  310656 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 09:13:13.042018  310656 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/config.json ...
	I0110 09:13:13.042040  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/config.json: {Name:mkcb4b8ad659f0c543dee56d07bc12d7e383fb43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:13.057592  310656 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 09:13:13.057735  310656 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 09:13:13.057755  310656 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory, skipping pull
	I0110 09:13:13.057767  310656 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in cache, skipping pull
	I0110 09:13:13.057773  310656 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 as a tarball
	I0110 09:13:13.057779  310656 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 from local cache
	I0110 09:13:31.166383  310656 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 from cached tarball
	I0110 09:13:31.166436  310656 cache.go:243] Successfully downloaded all kic artifacts
	I0110 09:13:31.166480  310656 start.go:360] acquireMachinesLock for addons-502860: {Name:mk81a5ce838651fae308c890d50099f4a0c02bce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 09:13:31.167275  310656 start.go:364] duration metric: took 765.901µs to acquireMachinesLock for "addons-502860"
	I0110 09:13:31.167313  310656 start.go:93] Provisioning new machine with config: &{Name:addons-502860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-502860 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 09:13:31.167403  310656 start.go:125] createHost starting for "" (driver="docker")
	I0110 09:13:31.170857  310656 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0110 09:13:31.171102  310656 start.go:159] libmachine.API.Create for "addons-502860" (driver="docker")
	I0110 09:13:31.171142  310656 client.go:173] LocalClient.Create starting
	I0110 09:13:31.171260  310656 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 09:13:31.275742  310656 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 09:13:31.883291  310656 cli_runner.go:164] Run: docker network inspect addons-502860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 09:13:31.899731  310656 cli_runner.go:211] docker network inspect addons-502860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 09:13:31.899820  310656 network_create.go:284] running [docker network inspect addons-502860] to gather additional debugging logs...
	I0110 09:13:31.899847  310656 cli_runner.go:164] Run: docker network inspect addons-502860
	W0110 09:13:31.916335  310656 cli_runner.go:211] docker network inspect addons-502860 returned with exit code 1
	I0110 09:13:31.916363  310656 network_create.go:287] error running [docker network inspect addons-502860]: docker network inspect addons-502860: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-502860 not found
	I0110 09:13:31.916376  310656 network_create.go:289] output of [docker network inspect addons-502860]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-502860 not found
	
	** /stderr **
	I0110 09:13:31.916486  310656 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:13:31.932946  310656 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c04150}
	I0110 09:13:31.932986  310656 network_create.go:124] attempt to create docker network addons-502860 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0110 09:13:31.933046  310656 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-502860 addons-502860
	I0110 09:13:31.991253  310656 network_create.go:108] docker network addons-502860 192.168.49.0/24 created
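The subnet and gateway picked above (192.168.49.0/24 via 192.168.49.1) can be cross-checked against the Docker daemon after the fact; a minimal spot check, assuming the same addons-502860 network name and a standard Docker CLI, would look like the following (this is an illustrative command, not part of the recorded test run):

    # Illustrative only, not run by the test: print the subnet and gateway that
    # Docker recorded for the network minikube just created.
    docker network inspect addons-502860 \
      --format '{{ (index .IPAM.Config 0).Subnet }} {{ (index .IPAM.Config 0).Gateway }}'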
	I0110 09:13:31.991302  310656 kic.go:121] calculated static IP "192.168.49.2" for the "addons-502860" container
	I0110 09:13:31.991375  310656 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 09:13:32.011209  310656 cli_runner.go:164] Run: docker volume create addons-502860 --label name.minikube.sigs.k8s.io=addons-502860 --label created_by.minikube.sigs.k8s.io=true
	I0110 09:13:32.030671  310656 oci.go:103] Successfully created a docker volume addons-502860
	I0110 09:13:32.030787  310656 cli_runner.go:164] Run: docker run --rm --name addons-502860-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-502860 --entrypoint /usr/bin/test -v addons-502860:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 09:13:34.293265  310656 cli_runner.go:217] Completed: docker run --rm --name addons-502860-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-502860 --entrypoint /usr/bin/test -v addons-502860:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib: (2.262436962s)
	I0110 09:13:34.293311  310656 oci.go:107] Successfully prepared a docker volume addons-502860
	I0110 09:13:34.293356  310656 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:13:34.293371  310656 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 09:13:34.293433  310656 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-502860:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 09:13:38.130312  310656 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-502860:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.836818226s)
	I0110 09:13:38.130346  310656 kic.go:203] duration metric: took 3.836971467s to extract preloaded images to volume ...
	W0110 09:13:38.130491  310656 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 09:13:38.130603  310656 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 09:13:38.197894  310656 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-502860 --name addons-502860 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-502860 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-502860 --network addons-502860 --ip 192.168.49.2 --volume addons-502860:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 09:13:38.526044  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Running}}
	I0110 09:13:38.550025  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:13:38.575243  310656 cli_runner.go:164] Run: docker exec addons-502860 stat /var/lib/dpkg/alternatives/iptables
	I0110 09:13:38.638869  310656 oci.go:144] the created container "addons-502860" has a running status.
	I0110 09:13:38.638901  310656 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa...
	I0110 09:13:39.405163  310656 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 09:13:39.434503  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:13:39.450849  310656 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 09:13:39.450869  310656 kic_runner.go:114] Args: [docker exec --privileged addons-502860 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 09:13:39.493354  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:13:39.511844  310656 machine.go:94] provisionDockerMachine start ...
	I0110 09:13:39.511937  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:13:39.530417  310656 main.go:144] libmachine: Using SSH client type: native
	I0110 09:13:39.530763  310656 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0110 09:13:39.530782  310656 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 09:13:39.531483  310656 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 09:13:42.680138  310656 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-502860
	
	I0110 09:13:42.680163  310656 ubuntu.go:182] provisioning hostname "addons-502860"
	I0110 09:13:42.680257  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:13:42.697861  310656 main.go:144] libmachine: Using SSH client type: native
	I0110 09:13:42.698169  310656 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0110 09:13:42.698188  310656 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-502860 && echo "addons-502860" | sudo tee /etc/hostname
	I0110 09:13:42.858103  310656 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-502860
	
	I0110 09:13:42.858181  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:13:42.875913  310656 main.go:144] libmachine: Using SSH client type: native
	I0110 09:13:42.876222  310656 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0110 09:13:42.876243  310656 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-502860' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-502860/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-502860' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 09:13:43.024876  310656 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 09:13:43.024901  310656 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 09:13:43.024930  310656 ubuntu.go:190] setting up certificates
	I0110 09:13:43.024940  310656 provision.go:84] configureAuth start
	I0110 09:13:43.025008  310656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-502860
	I0110 09:13:43.041849  310656 provision.go:143] copyHostCerts
	I0110 09:13:43.041933  310656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 09:13:43.042061  310656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 09:13:43.042123  310656 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 09:13:43.042175  310656 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.addons-502860 san=[127.0.0.1 192.168.49.2 addons-502860 localhost minikube]
	I0110 09:13:43.213948  310656 provision.go:177] copyRemoteCerts
	I0110 09:13:43.214045  310656 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 09:13:43.214097  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:13:43.231560  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:13:43.337953  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 09:13:43.357224  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0110 09:13:43.375436  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 09:13:43.392894  310656 provision.go:87] duration metric: took 367.931096ms to configureAuth
	I0110 09:13:43.392921  310656 ubuntu.go:206] setting minikube options for container-runtime
	I0110 09:13:43.393124  310656 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:13:43.393248  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:13:43.410681  310656 main.go:144] libmachine: Using SSH client type: native
	I0110 09:13:43.411019  310656 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I0110 09:13:43.411035  310656 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 09:13:43.721181  310656 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 09:13:43.721202  310656 machine.go:97] duration metric: took 4.209338608s to provisionDockerMachine
	I0110 09:13:43.721214  310656 client.go:176] duration metric: took 12.550060483s to LocalClient.Create
	I0110 09:13:43.721233  310656 start.go:167] duration metric: took 12.550131466s to libmachine.API.Create "addons-502860"
	I0110 09:13:43.721240  310656 start.go:293] postStartSetup for "addons-502860" (driver="docker")
	I0110 09:13:43.721254  310656 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 09:13:43.721325  310656 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 09:13:43.721372  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:13:43.739139  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:13:43.840188  310656 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 09:13:43.843231  310656 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 09:13:43.843263  310656 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 09:13:43.843275  310656 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 09:13:43.843341  310656 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 09:13:43.843368  310656 start.go:296] duration metric: took 122.118663ms for postStartSetup
	I0110 09:13:43.843676  310656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-502860
	I0110 09:13:43.860112  310656 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/config.json ...
	I0110 09:13:43.860391  310656 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:13:43.860444  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:13:43.878309  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:13:43.977457  310656 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 09:13:43.982060  310656 start.go:128] duration metric: took 12.814641788s to createHost
	I0110 09:13:43.982083  310656 start.go:83] releasing machines lock for "addons-502860", held for 12.814792402s
	I0110 09:13:43.982156  310656 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-502860
	I0110 09:13:44.000295  310656 ssh_runner.go:195] Run: cat /version.json
	I0110 09:13:44.000320  310656 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 09:13:44.000363  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:13:44.000386  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:13:44.026391  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:13:44.038579  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:13:44.242170  310656 ssh_runner.go:195] Run: systemctl --version
	I0110 09:13:44.248656  310656 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 09:13:44.287208  310656 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 09:13:44.291367  310656 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 09:13:44.291438  310656 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 09:13:44.319295  310656 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 09:13:44.319324  310656 start.go:496] detecting cgroup driver to use...
	I0110 09:13:44.319368  310656 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 09:13:44.319430  310656 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 09:13:44.337139  310656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 09:13:44.349790  310656 docker.go:218] disabling cri-docker service (if available) ...
	I0110 09:13:44.349873  310656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 09:13:44.367747  310656 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 09:13:44.386509  310656 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 09:13:44.503791  310656 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 09:13:44.627365  310656 docker.go:234] disabling docker service ...
	I0110 09:13:44.627452  310656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 09:13:44.653169  310656 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 09:13:44.666549  310656 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 09:13:44.777869  310656 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 09:13:44.892411  310656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 09:13:44.905470  310656 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 09:13:44.919417  310656 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 09:13:44.919494  310656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:13:44.928176  310656 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 09:13:44.928327  310656 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:13:44.936912  310656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:13:44.945821  310656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:13:44.954286  310656 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 09:13:44.962352  310656 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:13:44.971084  310656 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:13:44.985068  310656 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:13:44.993573  310656 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 09:13:45.008313  310656 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 09:13:45.025343  310656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:13:45.205678  310656 ssh_runner.go:195] Run: sudo systemctl restart crio
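The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf before this restart. Assuming those edits applied cleanly, the keys they touch can be spot-checked on the node with something like the sketch below (an illustrative command, not part of the recorded test run); based on the commands logged above, it should report pause:3.10.1, cgroupfs, conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0:

    # Illustrative only, not run by the test: show the settings the sed edits
    # above are expected to leave in the CRI-O drop-in config.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf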
	I0110 09:13:45.391568  310656 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 09:13:45.391652  310656 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 09:13:45.395375  310656 start.go:574] Will wait 60s for crictl version
	I0110 09:13:45.395444  310656 ssh_runner.go:195] Run: which crictl
	I0110 09:13:45.398888  310656 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 09:13:45.426435  310656 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 09:13:45.426550  310656 ssh_runner.go:195] Run: crio --version
	I0110 09:13:45.455499  310656 ssh_runner.go:195] Run: crio --version
	I0110 09:13:45.486499  310656 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 09:13:45.489183  310656 cli_runner.go:164] Run: docker network inspect addons-502860 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:13:45.504905  310656 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0110 09:13:45.508880  310656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:13:45.519446  310656 kubeadm.go:884] updating cluster {Name:addons-502860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-502860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 09:13:45.519563  310656 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:13:45.519616  310656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:13:45.556151  310656 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 09:13:45.556181  310656 crio.go:433] Images already preloaded, skipping extraction
	I0110 09:13:45.556239  310656 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:13:45.582391  310656 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 09:13:45.582416  310656 cache_images.go:86] Images are preloaded, skipping loading
	I0110 09:13:45.582424  310656 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I0110 09:13:45.582514  310656 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-502860 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-502860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 09:13:45.582598  310656 ssh_runner.go:195] Run: crio config
	I0110 09:13:45.652627  310656 cni.go:84] Creating CNI manager for ""
	I0110 09:13:45.652654  310656 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:13:45.652678  310656 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 09:13:45.652717  310656 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-502860 NodeName:addons-502860 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 09:13:45.652869  310656 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-502860"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
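	
A generated config like the one above is copied to the node a few lines further down (as /var/tmp/minikube/kubeadm.yaml.new, per the scp entry below). As a hedged sketch, one way to sanity-check such a config without modifying the node is a kubeadm dry run against the bundled v1.35.0 binary that the log locates next; this is illustrative and not something the test itself performs:

    # Illustrative only, not run by the test: render what kubeadm would do with
    # the config above, without changing anything on the node.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run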
	
	I0110 09:13:45.652964  310656 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 09:13:45.661246  310656 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 09:13:45.661386  310656 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 09:13:45.669730  310656 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0110 09:13:45.683170  310656 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 09:13:45.696299  310656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I0110 09:13:45.709909  310656 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0110 09:13:45.713652  310656 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:13:45.723640  310656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:13:45.842436  310656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:13:45.859636  310656 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860 for IP: 192.168.49.2
	I0110 09:13:45.859719  310656 certs.go:195] generating shared ca certs ...
	I0110 09:13:45.859749  310656 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:45.859936  310656 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 09:13:46.323905  310656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt ...
	I0110 09:13:46.323942  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt: {Name:mk04bdf71d4c58a1334fb81a086320862d72e0b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:46.324761  310656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key ...
	I0110 09:13:46.324777  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key: {Name:mk7360b6920d9beb783703ff9757e8445ef3eda1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:46.325506  310656 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 09:13:46.519519  310656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt ...
	I0110 09:13:46.519549  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt: {Name:mke213b379c00081251d9ac12cd6aecc9a753130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:46.520352  310656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key ...
	I0110 09:13:46.520368  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key: {Name:mkd508343cd78e58bb87deb299196ddf87bf0135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:46.520449  310656 certs.go:257] generating profile certs ...
	I0110 09:13:46.520534  310656 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.key
	I0110 09:13:46.520551  310656 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt with IP's: []
	I0110 09:13:46.714509  310656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt ...
	I0110 09:13:46.714539  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: {Name:mk668dc782515979b7933d684a3968c3281da51a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:46.715354  310656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.key ...
	I0110 09:13:46.715368  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.key: {Name:mk6982bbf59e8636a118fc5892a6db61435ee64b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:46.716050  310656 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.key.596450cf
	I0110 09:13:46.716072  310656 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.crt.596450cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0110 09:13:46.753917  310656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.crt.596450cf ...
	I0110 09:13:46.753942  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.crt.596450cf: {Name:mk83f6c410df60738e6046f7d808bc6183a04f3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:46.754731  310656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.key.596450cf ...
	I0110 09:13:46.754747  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.key.596450cf: {Name:mkf25e225166039f61111627f24c6c117cb06366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:46.754844  310656 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.crt.596450cf -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.crt
	I0110 09:13:46.754937  310656 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.key.596450cf -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.key
	I0110 09:13:46.754994  310656 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/proxy-client.key
	I0110 09:13:46.755015  310656 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/proxy-client.crt with IP's: []
	I0110 09:13:47.118233  310656 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/proxy-client.crt ...
	I0110 09:13:47.118267  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/proxy-client.crt: {Name:mkf9df89f868c0a56c794849059dd52df6d7530d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:47.119028  310656 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/proxy-client.key ...
	I0110 09:13:47.119051  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/proxy-client.key: {Name:mk40500a760261be45988194f161789d00aaea42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:47.119272  310656 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 09:13:47.119325  310656 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 09:13:47.119353  310656 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 09:13:47.119389  310656 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 09:13:47.120035  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 09:13:47.138901  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 09:13:47.156450  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 09:13:47.173869  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 09:13:47.191204  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 09:13:47.207950  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 09:13:47.224653  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 09:13:47.241708  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 09:13:47.257964  310656 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 09:13:47.274183  310656 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 09:13:47.287190  310656 ssh_runner.go:195] Run: openssl version
	I0110 09:13:47.293523  310656 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:13:47.300699  310656 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 09:13:47.307915  310656 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:13:47.311400  310656 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:13:47.311508  310656 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:13:47.352174  310656 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 09:13:47.359286  310656 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 09:13:47.366319  310656 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 09:13:47.369753  310656 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 09:13:47.369809  310656 kubeadm.go:401] StartCluster: {Name:addons-502860 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-502860 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:13:47.369894  310656 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:13:47.369949  310656 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:13:47.396481  310656 cri.go:96] found id: ""
	I0110 09:13:47.396626  310656 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 09:13:47.406066  310656 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 09:13:47.413781  310656 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:13:47.413885  310656 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:13:47.423761  310656 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:13:47.423822  310656 kubeadm.go:158] found existing configuration files:
	
	I0110 09:13:47.423887  310656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:13:47.431760  310656 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:13:47.431868  310656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:13:47.439010  310656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:13:47.447048  310656 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:13:47.447228  310656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:13:47.454442  310656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:13:47.462330  310656 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:13:47.462440  310656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:13:47.469538  310656 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:13:47.477066  310656 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:13:47.477177  310656 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:13:47.484655  310656 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:13:47.527060  310656 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:13:47.527277  310656 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:13:47.593608  310656 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:13:47.593684  310656 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:13:47.593725  310656 kubeadm.go:319] OS: Linux
	I0110 09:13:47.593788  310656 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:13:47.593841  310656 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:13:47.593892  310656 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:13:47.593945  310656 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:13:47.593999  310656 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:13:47.594050  310656 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:13:47.594104  310656 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:13:47.594154  310656 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:13:47.594204  310656 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:13:47.662241  310656 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:13:47.662374  310656 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:13:47.662489  310656 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:13:47.669882  310656 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:13:47.675687  310656 out.go:252]   - Generating certificates and keys ...
	I0110 09:13:47.675876  310656 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:13:47.675955  310656 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:13:47.917035  310656 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 09:13:48.062204  310656 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 09:13:48.146350  310656 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 09:13:48.264741  310656 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 09:13:48.548464  310656 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 09:13:48.548651  310656 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-502860 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0110 09:13:48.795138  310656 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 09:13:48.795498  310656 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-502860 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0110 09:13:48.935455  310656 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 09:13:49.113182  310656 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 09:13:49.448399  310656 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 09:13:49.448702  310656 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:13:49.825046  310656 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:13:50.008077  310656 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:13:50.131765  310656 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:13:50.607743  310656 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:13:50.723822  310656 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:13:50.724595  310656 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:13:50.727618  310656 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:13:50.731216  310656 out.go:252]   - Booting up control plane ...
	I0110 09:13:50.731325  310656 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:13:50.731426  310656 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:13:50.732799  310656 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:13:50.748330  310656 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:13:50.748619  310656 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:13:50.756354  310656 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:13:50.756777  310656 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:13:50.757047  310656 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:13:50.892941  310656 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:13:50.893069  310656 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:13:51.889981  310656 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001745474s
	I0110 09:13:51.893700  310656 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 09:13:51.893853  310656 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0110 09:13:51.894314  310656 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 09:13:51.894404  310656 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 09:13:53.404442  310656 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.510203029s
	I0110 09:13:54.930558  310656 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.036783532s
	I0110 09:13:56.895743  310656 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001802403s
	I0110 09:13:56.938314  310656 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 09:13:56.954220  310656 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 09:13:56.972127  310656 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 09:13:56.972363  310656 kubeadm.go:319] [mark-control-plane] Marking the node addons-502860 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 09:13:56.984629  310656 kubeadm.go:319] [bootstrap-token] Using token: u63e1t.ofi50smb9mglnrrv
	I0110 09:13:56.987643  310656 out.go:252]   - Configuring RBAC rules ...
	I0110 09:13:56.987766  310656 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 09:13:56.993795  310656 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 09:13:57.003167  310656 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 09:13:57.008090  310656 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 09:13:57.012553  310656 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 09:13:57.020730  310656 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 09:13:57.303622  310656 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 09:13:57.771860  310656 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 09:13:58.303670  310656 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 09:13:58.304979  310656 kubeadm.go:319] 
	I0110 09:13:58.305052  310656 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 09:13:58.305062  310656 kubeadm.go:319] 
	I0110 09:13:58.305136  310656 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 09:13:58.305144  310656 kubeadm.go:319] 
	I0110 09:13:58.305169  310656 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 09:13:58.305228  310656 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 09:13:58.305279  310656 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 09:13:58.305287  310656 kubeadm.go:319] 
	I0110 09:13:58.305338  310656 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 09:13:58.305346  310656 kubeadm.go:319] 
	I0110 09:13:58.305391  310656 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 09:13:58.305399  310656 kubeadm.go:319] 
	I0110 09:13:58.305448  310656 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 09:13:58.305522  310656 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 09:13:58.305594  310656 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 09:13:58.305602  310656 kubeadm.go:319] 
	I0110 09:13:58.305682  310656 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 09:13:58.305758  310656 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 09:13:58.305766  310656 kubeadm.go:319] 
	I0110 09:13:58.305845  310656 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token u63e1t.ofi50smb9mglnrrv \
	I0110 09:13:58.305947  310656 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e \
	I0110 09:13:58.305969  310656 kubeadm.go:319] 	--control-plane 
	I0110 09:13:58.305978  310656 kubeadm.go:319] 
	I0110 09:13:58.306066  310656 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 09:13:58.306076  310656 kubeadm.go:319] 
	I0110 09:13:58.306178  310656 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token u63e1t.ofi50smb9mglnrrv \
	I0110 09:13:58.306281  310656 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e 
	I0110 09:13:58.309054  310656 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:13:58.309478  310656 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:13:58.309593  310656 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:13:58.309635  310656 cni.go:84] Creating CNI manager for ""
	I0110 09:13:58.309647  310656 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:13:58.314638  310656 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 09:13:58.317564  310656 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 09:13:58.323409  310656 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 09:13:58.323428  310656 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 09:13:58.338447  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 09:13:58.635351  310656 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 09:13:58.635471  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:58.635556  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-502860 minikube.k8s.io/updated_at=2026_01_10T09_13_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=addons-502860 minikube.k8s.io/primary=true
	I0110 09:13:58.895894  310656 ops.go:34] apiserver oom_adj: -16
	I0110 09:13:58.896019  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:59.396140  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:13:59.896360  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:14:00.397133  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:14:00.896286  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:14:01.396192  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:14:01.896157  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:14:02.396604  310656 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:14:02.500675  310656 kubeadm.go:1114] duration metric: took 3.865246205s to wait for elevateKubeSystemPrivileges
	I0110 09:14:02.500709  310656 kubeadm.go:403] duration metric: took 15.130911432s to StartCluster
	I0110 09:14:02.500728  310656 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:14:02.501470  310656 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:14:02.501868  310656 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:14:02.502083  310656 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 09:14:02.502211  310656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 09:14:02.502460  310656 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:14:02.502498  310656 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0110 09:14:02.502581  310656 addons.go:70] Setting yakd=true in profile "addons-502860"
	I0110 09:14:02.502601  310656 addons.go:239] Setting addon yakd=true in "addons-502860"
	I0110 09:14:02.502625  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.503103  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.503705  310656 addons.go:70] Setting metrics-server=true in profile "addons-502860"
	I0110 09:14:02.503716  310656 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-502860"
	I0110 09:14:02.503731  310656 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-502860"
	I0110 09:14:02.503740  310656 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-502860"
	I0110 09:14:02.503750  310656 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-502860"
	I0110 09:14:02.503775  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.503778  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.504194  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.504211  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.506735  310656 addons.go:70] Setting cloud-spanner=true in profile "addons-502860"
	I0110 09:14:02.506820  310656 addons.go:239] Setting addon cloud-spanner=true in "addons-502860"
	I0110 09:14:02.506929  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.507706  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.507049  310656 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-502860"
	I0110 09:14:02.514214  310656 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-502860"
	I0110 09:14:02.514269  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.514848  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.507059  310656 addons.go:70] Setting default-storageclass=true in profile "addons-502860"
	I0110 09:14:02.520948  310656 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-502860"
	I0110 09:14:02.521281  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.507063  310656 addons.go:70] Setting gcp-auth=true in profile "addons-502860"
	I0110 09:14:02.541675  310656 mustload.go:66] Loading cluster: addons-502860
	I0110 09:14:02.541980  310656 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:14:02.543121  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.507066  310656 addons.go:70] Setting ingress=true in profile "addons-502860"
	I0110 09:14:02.507069  310656 addons.go:70] Setting ingress-dns=true in profile "addons-502860"
	I0110 09:14:02.507073  310656 addons.go:70] Setting inspektor-gadget=true in profile "addons-502860"
	I0110 09:14:02.507084  310656 out.go:179] * Verifying Kubernetes components...
	I0110 09:14:02.507100  310656 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-502860"
	I0110 09:14:02.507105  310656 addons.go:70] Setting registry=true in profile "addons-502860"
	I0110 09:14:02.507108  310656 addons.go:70] Setting registry-creds=true in profile "addons-502860"
	I0110 09:14:02.507118  310656 addons.go:70] Setting storage-provisioner=true in profile "addons-502860"
	I0110 09:14:02.507126  310656 addons.go:70] Setting volumesnapshots=true in profile "addons-502860"
	I0110 09:14:02.507131  310656 addons.go:70] Setting volcano=true in profile "addons-502860"
	I0110 09:14:02.503726  310656 addons.go:239] Setting addon metrics-server=true in "addons-502860"
	I0110 09:14:02.549695  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.550222  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.566004  310656 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-502860"
	I0110 09:14:02.566511  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.572613  310656 addons.go:239] Setting addon registry=true in "addons-502860"
	I0110 09:14:02.572720  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.573269  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.587765  310656 addons.go:239] Setting addon registry-creds=true in "addons-502860"
	I0110 09:14:02.587880  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.588534  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.600594  310656 addons.go:239] Setting addon ingress=true in "addons-502860"
	I0110 09:14:02.600666  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.601135  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.601349  310656 addons.go:239] Setting addon storage-provisioner=true in "addons-502860"
	I0110 09:14:02.601416  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.601990  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.625565  310656 addons.go:239] Setting addon ingress-dns=true in "addons-502860"
	I0110 09:14:02.625622  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.626108  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.636668  310656 addons.go:239] Setting addon volumesnapshots=true in "addons-502860"
	I0110 09:14:02.636785  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.637293  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.647942  310656 addons.go:239] Setting addon inspektor-gadget=true in "addons-502860"
	I0110 09:14:02.647992  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.648485  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.656739  310656 addons.go:239] Setting addon volcano=true in "addons-502860"
	I0110 09:14:02.656810  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.657460  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.679109  310656 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:14:02.693300  310656 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I0110 09:14:02.700598  310656 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0110 09:14:02.700624  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0110 09:14:02.700716  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:02.759331  310656 addons.go:239] Setting addon default-storageclass=true in "addons-502860"
	I0110 09:14:02.759418  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.801083  310656 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I0110 09:14:02.808099  310656 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I0110 09:14:02.808424  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0110 09:14:02.808616  310656 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0110 09:14:02.808785  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:02.820666  310656 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0110 09:14:02.820843  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0110 09:14:02.821045  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:02.844301  310656 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0110 09:14:02.857951  310656 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 09:14:02.859531  310656 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0110 09:14:02.874899  310656 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I0110 09:14:02.879100  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.874818  310656 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 09:14:02.879290  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 09:14:02.879365  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:02.893247  310656 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0110 09:14:02.874830  310656 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0110 09:14:02.896161  310656 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0110 09:14:02.896183  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0110 09:14:02.896250  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:02.874842  310656 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W0110 09:14:02.901938  310656 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0110 09:14:02.904552  310656 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0110 09:14:02.904583  310656 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I0110 09:14:02.904591  310656 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0110 09:14:02.904603  310656 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0110 09:14:02.904612  310656 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0110 09:14:02.904675  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:02.905712  310656 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-502860"
	I0110 09:14:02.905814  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.906247  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:02.916644  310656 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0110 09:14:02.916669  310656 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0110 09:14:02.916741  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:02.947701  310656 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0110 09:14:02.947730  310656 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0110 09:14:02.947798  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:02.961119  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:02.962278  310656 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 09:14:02.962481  310656 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.48.0
	I0110 09:14:02.963533  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:02.970736  310656 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0110 09:14:02.971145  310656 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0110 09:14:02.971167  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0110 09:14:02.971230  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:02.981689  310656 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0110 09:14:02.981930  310656 out.go:179]   - Using image docker.io/registry:3.0.0
	I0110 09:14:02.984285  310656 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0110 09:14:02.984310  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0110 09:14:02.984373  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:03.007811  310656 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 09:14:03.009492  310656 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0110 09:14:03.027769  310656 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I0110 09:14:03.035842  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0110 09:14:03.035954  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:03.052677  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.053536  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.071260  310656 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 09:14:03.074244  310656 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0110 09:14:03.075297  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I0110 09:14:03.075387  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:03.090484  310656 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0110 09:14:03.090728  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.100648  310656 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0110 09:14:03.108617  310656 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0110 09:14:03.116619  310656 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0110 09:14:03.116646  310656 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0110 09:14:03.116726  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:03.143310  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.152606  310656 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:14:03.156043  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.157545  310656 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 09:14:03.157562  310656 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 09:14:03.157621  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:03.167570  310656 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0110 09:14:03.173894  310656 out.go:179]   - Using image docker.io/busybox:stable
	I0110 09:14:03.177552  310656 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0110 09:14:03.177577  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0110 09:14:03.177646  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:03.197961  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.214607  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.222796  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.234359  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.237063  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.264056  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.285458  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.289359  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.299049  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:03.635307  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0110 09:14:03.659279  310656 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0110 09:14:03.659315  310656 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0110 09:14:03.777769  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 09:14:03.829818  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0110 09:14:03.929853  310656 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0110 09:14:03.929878  310656 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0110 09:14:03.937202  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0110 09:14:03.980032  310656 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0110 09:14:03.980054  310656 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0110 09:14:03.989167  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I0110 09:14:04.030714  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0110 09:14:04.070066  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0110 09:14:04.150803  310656 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I0110 09:14:04.150828  310656 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0110 09:14:04.160579  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0110 09:14:04.162999  310656 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0110 09:14:04.163038  310656 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0110 09:14:04.207441  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 09:14:04.209699  310656 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0110 09:14:04.209722  310656 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0110 09:14:04.216462  310656 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0110 09:14:04.216484  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0110 09:14:04.218150  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0110 09:14:04.232367  310656 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0110 09:14:04.232394  310656 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0110 09:14:04.434852  310656 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0110 09:14:04.434886  310656 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0110 09:14:04.544985  310656 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0110 09:14:04.545010  310656 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0110 09:14:04.579566  310656 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0110 09:14:04.579595  310656 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0110 09:14:04.583873  310656 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0110 09:14:04.583896  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0110 09:14:04.636527  310656 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0110 09:14:04.636557  310656 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0110 09:14:04.862887  310656 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0110 09:14:04.862910  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0110 09:14:04.886360  310656 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0110 09:14:04.886395  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I0110 09:14:04.987736  310656 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0110 09:14:04.987761  310656 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0110 09:14:05.029349  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0110 09:14:05.031577  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0110 09:14:05.080258  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0110 09:14:05.082855  310656 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0110 09:14:05.082884  310656 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0110 09:14:05.152590  310656 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.999939944s)
	I0110 09:14:05.152725  310656 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.190417158s)
	I0110 09:14:05.152860  310656 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0110 09:14:05.153627  310656 node_ready.go:35] waiting up to 6m0s for node "addons-502860" to be "Ready" ...
	I0110 09:14:05.337457  310656 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0110 09:14:05.337481  310656 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0110 09:14:05.413208  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.777869077s)
	I0110 09:14:05.505079  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0110 09:14:05.665706  310656 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-502860" context rescaled to 1 replicas
	I0110 09:14:05.701922  310656 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0110 09:14:05.702006  310656 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0110 09:14:05.970575  310656 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0110 09:14:05.970644  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0110 09:14:06.111088  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.333281619s)
	I0110 09:14:06.111392  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.281546222s)
	I0110 09:14:06.283262  310656 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0110 09:14:06.283340  310656 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0110 09:14:06.591924  310656 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0110 09:14:06.591989  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0110 09:14:06.862726  310656 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0110 09:14:06.862795  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0110 09:14:07.152062  310656 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0110 09:14:07.152096  310656 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W0110 09:14:07.207169  310656 node_ready.go:57] node "addons-502860" has "Ready":"False" status (will retry)
	I0110 09:14:07.441239  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0110 09:14:08.080966  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.143699756s)
	I0110 09:14:08.917692  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.928488097s)
	I0110 09:14:08.917767  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.887021766s)
	I0110 09:14:08.917940  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.847851766s)
	I0110 09:14:08.917978  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.757365247s)
	I0110 09:14:08.918026  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.710564188s)
	W0110 09:14:09.661439  310656 node_ready.go:57] node "addons-502860" has "Ready":"False" status (will retry)
	I0110 09:14:09.767850  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.549658276s)
	I0110 09:14:09.767922  310656 addons.go:495] Verifying addon ingress=true in "addons-502860"
	I0110 09:14:09.768264  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.738885441s)
	I0110 09:14:09.768310  310656 addons.go:495] Verifying addon registry=true in "addons-502860"
	I0110 09:14:09.768405  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.736782949s)
	I0110 09:14:09.768668  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.688377047s)
	W0110 09:14:09.768700  310656 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0110 09:14:09.768736  310656 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0110 09:14:09.768796  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.263691985s)
	I0110 09:14:09.768809  310656 addons.go:495] Verifying addon metrics-server=true in "addons-502860"
	I0110 09:14:09.770975  310656 out.go:179] * Verifying ingress addon...
	I0110 09:14:09.773073  310656 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-502860 service yakd-dashboard -n yakd-dashboard
	
	I0110 09:14:09.773140  310656 out.go:179] * Verifying registry addon...
	I0110 09:14:09.777531  310656 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0110 09:14:09.777756  310656 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0110 09:14:09.785859  310656 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0110 09:14:09.785887  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:09.786027  310656 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0110 09:14:09.786041  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:10.055978  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.614678069s)
	I0110 09:14:10.056018  310656 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-502860"
	I0110 09:14:10.059096  310656 out.go:179] * Verifying csi-hostpath-driver addon...
	I0110 09:14:10.062952  310656 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0110 09:14:10.071892  310656 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0110 09:14:10.071920  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:10.114128  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0110 09:14:10.282158  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:10.288744  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:10.567148  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:10.579116  310656 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0110 09:14:10.579211  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:10.596884  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:10.723169  310656 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0110 09:14:10.742425  310656 addons.go:239] Setting addon gcp-auth=true in "addons-502860"
	I0110 09:14:10.742478  310656 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:14:10.742939  310656 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:14:10.760108  310656 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0110 09:14:10.760164  310656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:14:10.788149  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:10.792033  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:10.796351  310656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:14:11.066197  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:11.281939  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:11.281962  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:11.566420  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:11.781014  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:11.781165  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:12.066489  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0110 09:14:12.157563  310656 node_ready.go:57] node "addons-502860" has "Ready":"False" status (will retry)
	I0110 09:14:12.280751  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:12.281323  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:12.567070  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:12.782519  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:12.782599  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:12.812391  310656 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.698216047s)
	I0110 09:14:12.812405  310656 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.052267798s)
	I0110 09:14:12.815516  310656 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 09:14:12.818463  310656 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0110 09:14:12.821150  310656 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0110 09:14:12.821175  310656 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0110 09:14:12.834371  310656 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0110 09:14:12.834394  310656 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0110 09:14:12.847418  310656 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0110 09:14:12.847446  310656 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0110 09:14:12.862692  310656 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0110 09:14:13.066951  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:13.290085  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:13.290298  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:13.358816  310656 addons.go:495] Verifying addon gcp-auth=true in "addons-502860"
	I0110 09:14:13.361841  310656 out.go:179] * Verifying gcp-auth addon...
	I0110 09:14:13.365372  310656 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0110 09:14:13.368218  310656 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0110 09:14:13.368281  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:13.566930  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:13.780413  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:13.780835  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:13.868666  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:14.066669  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:14.280646  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:14.280892  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:14.368767  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:14.565663  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0110 09:14:14.657368  310656 node_ready.go:57] node "addons-502860" has "Ready":"False" status (will retry)
	I0110 09:14:14.781213  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:14.781506  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:14.868453  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:15.066788  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:15.281298  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:15.281779  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:15.368580  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:15.566903  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:15.781264  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:15.781362  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:15.869357  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:16.066233  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:16.281364  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:16.281891  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:16.368476  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:16.566969  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:16.782217  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:16.782282  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:16.870360  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:17.066038  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0110 09:14:17.159766  310656 node_ready.go:57] node "addons-502860" has "Ready":"False" status (will retry)
	I0110 09:14:17.280721  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:17.281358  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:17.373987  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:17.596625  310656 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0110 09:14:17.596650  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:17.659400  310656 node_ready.go:49] node "addons-502860" is "Ready"
	I0110 09:14:17.659434  310656 node_ready.go:38] duration metric: took 12.505744494s for node "addons-502860" to be "Ready" ...
	I0110 09:14:17.659450  310656 api_server.go:52] waiting for apiserver process to appear ...
	I0110 09:14:17.659510  310656 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 09:14:17.683603  310656 api_server.go:72] duration metric: took 15.181482867s to wait for apiserver process to appear ...
	I0110 09:14:17.683631  310656 api_server.go:88] waiting for apiserver healthz status ...
	I0110 09:14:17.683649  310656 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0110 09:14:17.691918  310656 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0110 09:14:17.693796  310656 api_server.go:141] control plane version: v1.35.0
	I0110 09:14:17.693825  310656 api_server.go:131] duration metric: took 10.18748ms to wait for apiserver health ...
	I0110 09:14:17.693835  310656 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 09:14:17.706561  310656 system_pods.go:59] 19 kube-system pods found
	I0110 09:14:17.706598  310656 system_pods.go:61] "coredns-7d764666f9-ldt9g" [084339d5-8bac-4c1e-8f7a-aca06ce0459e] Pending
	I0110 09:14:17.706605  310656 system_pods.go:61] "csi-hostpath-attacher-0" [85991290-9605-4f0c-ac87-6c0ab46ebdc1] Pending
	I0110 09:14:17.706615  310656 system_pods.go:61] "csi-hostpath-resizer-0" [a0c1f4c0-6082-49dd-a620-f43f004badfc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 09:14:17.706622  310656 system_pods.go:61] "csi-hostpathplugin-cxkkx" [9a7fdae4-42a2-4048-b570-267c1d1ea151] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 09:14:17.706631  310656 system_pods.go:61] "etcd-addons-502860" [d12eaac2-1664-45e0-b86b-de568dc7f737] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 09:14:17.706642  310656 system_pods.go:61] "kindnet-mjsdt" [14f8bcad-0ebe-4e9d-b10f-0d01c576a5f2] Running
	I0110 09:14:17.706648  310656 system_pods.go:61] "kube-apiserver-addons-502860" [8b9124c1-4075-4ed4-9d35-3d09baaeb498] Running
	I0110 09:14:17.706655  310656 system_pods.go:61] "kube-controller-manager-addons-502860" [6dc43c39-17ad-4945-abb9-65cc13185577] Running
	I0110 09:14:17.706741  310656 system_pods.go:61] "kube-ingress-dns-minikube" [cb8c7912-b63e-40b1-8f28-fa210743a3a3] Pending
	I0110 09:14:17.706762  310656 system_pods.go:61] "kube-proxy-kqg8f" [d4b3e821-b2ca-4b08-abc4-4c02b3b0f7ad] Running
	I0110 09:14:17.706769  310656 system_pods.go:61] "kube-scheduler-addons-502860" [be32a27b-cdc7-4e93-b338-bf9b6f6874cd] Running
	I0110 09:14:17.706780  310656 system_pods.go:61] "metrics-server-5778bb4788-4v9dz" [c45c3fc8-40f0-4cc9-911a-8d9f4dd14ac1] Pending
	I0110 09:14:17.706785  310656 system_pods.go:61] "nvidia-device-plugin-daemonset-jkcrk" [e2fc42df-8f5d-4a51-a3df-4000a36a0262] Pending
	I0110 09:14:17.706789  310656 system_pods.go:61] "registry-788cd7d5bc-7m2mc" [ca5b57f5-f785-4af1-9f36-3adbaea3fd71] Pending
	I0110 09:14:17.706804  310656 system_pods.go:61] "registry-creds-567fb78d95-j77tl" [a0e51532-983e-47d2-ad4e-da6c02f070ab] Pending
	I0110 09:14:17.706809  310656 system_pods.go:61] "registry-proxy-gzwbd" [7cd96d4c-f796-4c02-b744-e2ed1d51cb2e] Pending
	I0110 09:14:17.706816  310656 system_pods.go:61] "snapshot-controller-6588d87457-8gqxp" [7960753b-c675-43d3-bd09-935f5548adf5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 09:14:17.706824  310656 system_pods.go:61] "snapshot-controller-6588d87457-gdmgr" [d05a8993-b3a0-454a-a998-bfdb5fd5e7ac] Pending
	I0110 09:14:17.706830  310656 system_pods.go:61] "storage-provisioner" [9257aa4e-ce2c-45d7-83a6-726343c3898a] Pending
	I0110 09:14:17.706841  310656 system_pods.go:74] duration metric: took 13.000717ms to wait for pod list to return data ...
	I0110 09:14:17.706849  310656 default_sa.go:34] waiting for default service account to be created ...
	I0110 09:14:17.737200  310656 default_sa.go:45] found service account: "default"
	I0110 09:14:17.737227  310656 default_sa.go:55] duration metric: took 30.368241ms for default service account to be created ...
	I0110 09:14:17.737239  310656 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 09:14:17.753217  310656 system_pods.go:86] 19 kube-system pods found
	I0110 09:14:17.753248  310656 system_pods.go:89] "coredns-7d764666f9-ldt9g" [084339d5-8bac-4c1e-8f7a-aca06ce0459e] Pending
	I0110 09:14:17.753255  310656 system_pods.go:89] "csi-hostpath-attacher-0" [85991290-9605-4f0c-ac87-6c0ab46ebdc1] Pending
	I0110 09:14:17.753262  310656 system_pods.go:89] "csi-hostpath-resizer-0" [a0c1f4c0-6082-49dd-a620-f43f004badfc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 09:14:17.753272  310656 system_pods.go:89] "csi-hostpathplugin-cxkkx" [9a7fdae4-42a2-4048-b570-267c1d1ea151] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 09:14:17.753281  310656 system_pods.go:89] "etcd-addons-502860" [d12eaac2-1664-45e0-b86b-de568dc7f737] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 09:14:17.753288  310656 system_pods.go:89] "kindnet-mjsdt" [14f8bcad-0ebe-4e9d-b10f-0d01c576a5f2] Running
	I0110 09:14:17.753294  310656 system_pods.go:89] "kube-apiserver-addons-502860" [8b9124c1-4075-4ed4-9d35-3d09baaeb498] Running
	I0110 09:14:17.753300  310656 system_pods.go:89] "kube-controller-manager-addons-502860" [6dc43c39-17ad-4945-abb9-65cc13185577] Running
	I0110 09:14:17.753308  310656 system_pods.go:89] "kube-ingress-dns-minikube" [cb8c7912-b63e-40b1-8f28-fa210743a3a3] Pending
	I0110 09:14:17.753313  310656 system_pods.go:89] "kube-proxy-kqg8f" [d4b3e821-b2ca-4b08-abc4-4c02b3b0f7ad] Running
	I0110 09:14:17.753320  310656 system_pods.go:89] "kube-scheduler-addons-502860" [be32a27b-cdc7-4e93-b338-bf9b6f6874cd] Running
	I0110 09:14:17.753327  310656 system_pods.go:89] "metrics-server-5778bb4788-4v9dz" [c45c3fc8-40f0-4cc9-911a-8d9f4dd14ac1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 09:14:17.753335  310656 system_pods.go:89] "nvidia-device-plugin-daemonset-jkcrk" [e2fc42df-8f5d-4a51-a3df-4000a36a0262] Pending
	I0110 09:14:17.753340  310656 system_pods.go:89] "registry-788cd7d5bc-7m2mc" [ca5b57f5-f785-4af1-9f36-3adbaea3fd71] Pending
	I0110 09:14:17.753344  310656 system_pods.go:89] "registry-creds-567fb78d95-j77tl" [a0e51532-983e-47d2-ad4e-da6c02f070ab] Pending
	I0110 09:14:17.753349  310656 system_pods.go:89] "registry-proxy-gzwbd" [7cd96d4c-f796-4c02-b744-e2ed1d51cb2e] Pending
	I0110 09:14:17.753356  310656 system_pods.go:89] "snapshot-controller-6588d87457-8gqxp" [7960753b-c675-43d3-bd09-935f5548adf5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 09:14:17.753367  310656 system_pods.go:89] "snapshot-controller-6588d87457-gdmgr" [d05a8993-b3a0-454a-a998-bfdb5fd5e7ac] Pending
	I0110 09:14:17.753372  310656 system_pods.go:89] "storage-provisioner" [9257aa4e-ce2c-45d7-83a6-726343c3898a] Pending
	I0110 09:14:17.753392  310656 retry.go:84] will retry after 300ms: missing components: kube-dns
	I0110 09:14:17.837922  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:17.839950  310656 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0110 09:14:17.839971  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:17.952287  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:18.044349  310656 system_pods.go:86] 19 kube-system pods found
	I0110 09:14:18.044388  310656 system_pods.go:89] "coredns-7d764666f9-ldt9g" [084339d5-8bac-4c1e-8f7a-aca06ce0459e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 09:14:18.044397  310656 system_pods.go:89] "csi-hostpath-attacher-0" [85991290-9605-4f0c-ac87-6c0ab46ebdc1] Pending
	I0110 09:14:18.044405  310656 system_pods.go:89] "csi-hostpath-resizer-0" [a0c1f4c0-6082-49dd-a620-f43f004badfc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 09:14:18.044412  310656 system_pods.go:89] "csi-hostpathplugin-cxkkx" [9a7fdae4-42a2-4048-b570-267c1d1ea151] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 09:14:18.044420  310656 system_pods.go:89] "etcd-addons-502860" [d12eaac2-1664-45e0-b86b-de568dc7f737] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 09:14:18.044426  310656 system_pods.go:89] "kindnet-mjsdt" [14f8bcad-0ebe-4e9d-b10f-0d01c576a5f2] Running
	I0110 09:14:18.044436  310656 system_pods.go:89] "kube-apiserver-addons-502860" [8b9124c1-4075-4ed4-9d35-3d09baaeb498] Running
	I0110 09:14:18.044441  310656 system_pods.go:89] "kube-controller-manager-addons-502860" [6dc43c39-17ad-4945-abb9-65cc13185577] Running
	I0110 09:14:18.044448  310656 system_pods.go:89] "kube-ingress-dns-minikube" [cb8c7912-b63e-40b1-8f28-fa210743a3a3] Pending
	I0110 09:14:18.044452  310656 system_pods.go:89] "kube-proxy-kqg8f" [d4b3e821-b2ca-4b08-abc4-4c02b3b0f7ad] Running
	I0110 09:14:18.044464  310656 system_pods.go:89] "kube-scheduler-addons-502860" [be32a27b-cdc7-4e93-b338-bf9b6f6874cd] Running
	I0110 09:14:18.044471  310656 system_pods.go:89] "metrics-server-5778bb4788-4v9dz" [c45c3fc8-40f0-4cc9-911a-8d9f4dd14ac1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 09:14:18.044475  310656 system_pods.go:89] "nvidia-device-plugin-daemonset-jkcrk" [e2fc42df-8f5d-4a51-a3df-4000a36a0262] Pending
	I0110 09:14:18.044486  310656 system_pods.go:89] "registry-788cd7d5bc-7m2mc" [ca5b57f5-f785-4af1-9f36-3adbaea3fd71] Pending
	I0110 09:14:18.044490  310656 system_pods.go:89] "registry-creds-567fb78d95-j77tl" [a0e51532-983e-47d2-ad4e-da6c02f070ab] Pending
	I0110 09:14:18.044510  310656 system_pods.go:89] "registry-proxy-gzwbd" [7cd96d4c-f796-4c02-b744-e2ed1d51cb2e] Pending
	I0110 09:14:18.044517  310656 system_pods.go:89] "snapshot-controller-6588d87457-8gqxp" [7960753b-c675-43d3-bd09-935f5548adf5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 09:14:18.044523  310656 system_pods.go:89] "snapshot-controller-6588d87457-gdmgr" [d05a8993-b3a0-454a-a998-bfdb5fd5e7ac] Pending
	I0110 09:14:18.044529  310656 system_pods.go:89] "storage-provisioner" [9257aa4e-ce2c-45d7-83a6-726343c3898a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 09:14:18.089409  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:18.292034  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:18.295997  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:18.371568  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:18.385932  310656 system_pods.go:86] 19 kube-system pods found
	I0110 09:14:18.385972  310656 system_pods.go:89] "coredns-7d764666f9-ldt9g" [084339d5-8bac-4c1e-8f7a-aca06ce0459e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 09:14:18.385981  310656 system_pods.go:89] "csi-hostpath-attacher-0" [85991290-9605-4f0c-ac87-6c0ab46ebdc1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 09:14:18.385989  310656 system_pods.go:89] "csi-hostpath-resizer-0" [a0c1f4c0-6082-49dd-a620-f43f004badfc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 09:14:18.385996  310656 system_pods.go:89] "csi-hostpathplugin-cxkkx" [9a7fdae4-42a2-4048-b570-267c1d1ea151] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 09:14:18.386004  310656 system_pods.go:89] "etcd-addons-502860" [d12eaac2-1664-45e0-b86b-de568dc7f737] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 09:14:18.386009  310656 system_pods.go:89] "kindnet-mjsdt" [14f8bcad-0ebe-4e9d-b10f-0d01c576a5f2] Running
	I0110 09:14:18.386023  310656 system_pods.go:89] "kube-apiserver-addons-502860" [8b9124c1-4075-4ed4-9d35-3d09baaeb498] Running
	I0110 09:14:18.386030  310656 system_pods.go:89] "kube-controller-manager-addons-502860" [6dc43c39-17ad-4945-abb9-65cc13185577] Running
	I0110 09:14:18.386043  310656 system_pods.go:89] "kube-ingress-dns-minikube" [cb8c7912-b63e-40b1-8f28-fa210743a3a3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 09:14:18.386048  310656 system_pods.go:89] "kube-proxy-kqg8f" [d4b3e821-b2ca-4b08-abc4-4c02b3b0f7ad] Running
	I0110 09:14:18.386053  310656 system_pods.go:89] "kube-scheduler-addons-502860" [be32a27b-cdc7-4e93-b338-bf9b6f6874cd] Running
	I0110 09:14:18.386066  310656 system_pods.go:89] "metrics-server-5778bb4788-4v9dz" [c45c3fc8-40f0-4cc9-911a-8d9f4dd14ac1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 09:14:18.386073  310656 system_pods.go:89] "nvidia-device-plugin-daemonset-jkcrk" [e2fc42df-8f5d-4a51-a3df-4000a36a0262] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 09:14:18.386081  310656 system_pods.go:89] "registry-788cd7d5bc-7m2mc" [ca5b57f5-f785-4af1-9f36-3adbaea3fd71] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 09:14:18.386087  310656 system_pods.go:89] "registry-creds-567fb78d95-j77tl" [a0e51532-983e-47d2-ad4e-da6c02f070ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 09:14:18.386095  310656 system_pods.go:89] "registry-proxy-gzwbd" [7cd96d4c-f796-4c02-b744-e2ed1d51cb2e] Pending
	I0110 09:14:18.386102  310656 system_pods.go:89] "snapshot-controller-6588d87457-8gqxp" [7960753b-c675-43d3-bd09-935f5548adf5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 09:14:18.386115  310656 system_pods.go:89] "snapshot-controller-6588d87457-gdmgr" [d05a8993-b3a0-454a-a998-bfdb5fd5e7ac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 09:14:18.386122  310656 system_pods.go:89] "storage-provisioner" [9257aa4e-ce2c-45d7-83a6-726343c3898a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 09:14:18.567066  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:18.782601  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:18.782768  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:18.787816  310656 system_pods.go:86] 19 kube-system pods found
	I0110 09:14:18.787854  310656 system_pods.go:89] "coredns-7d764666f9-ldt9g" [084339d5-8bac-4c1e-8f7a-aca06ce0459e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 09:14:18.787863  310656 system_pods.go:89] "csi-hostpath-attacher-0" [85991290-9605-4f0c-ac87-6c0ab46ebdc1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 09:14:18.787872  310656 system_pods.go:89] "csi-hostpath-resizer-0" [a0c1f4c0-6082-49dd-a620-f43f004badfc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 09:14:18.787880  310656 system_pods.go:89] "csi-hostpathplugin-cxkkx" [9a7fdae4-42a2-4048-b570-267c1d1ea151] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 09:14:18.787888  310656 system_pods.go:89] "etcd-addons-502860" [d12eaac2-1664-45e0-b86b-de568dc7f737] Running
	I0110 09:14:18.787894  310656 system_pods.go:89] "kindnet-mjsdt" [14f8bcad-0ebe-4e9d-b10f-0d01c576a5f2] Running
	I0110 09:14:18.787900  310656 system_pods.go:89] "kube-apiserver-addons-502860" [8b9124c1-4075-4ed4-9d35-3d09baaeb498] Running
	I0110 09:14:18.787909  310656 system_pods.go:89] "kube-controller-manager-addons-502860" [6dc43c39-17ad-4945-abb9-65cc13185577] Running
	I0110 09:14:18.787917  310656 system_pods.go:89] "kube-ingress-dns-minikube" [cb8c7912-b63e-40b1-8f28-fa210743a3a3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 09:14:18.787926  310656 system_pods.go:89] "kube-proxy-kqg8f" [d4b3e821-b2ca-4b08-abc4-4c02b3b0f7ad] Running
	I0110 09:14:18.787931  310656 system_pods.go:89] "kube-scheduler-addons-502860" [be32a27b-cdc7-4e93-b338-bf9b6f6874cd] Running
	I0110 09:14:18.787939  310656 system_pods.go:89] "metrics-server-5778bb4788-4v9dz" [c45c3fc8-40f0-4cc9-911a-8d9f4dd14ac1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 09:14:18.787949  310656 system_pods.go:89] "nvidia-device-plugin-daemonset-jkcrk" [e2fc42df-8f5d-4a51-a3df-4000a36a0262] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 09:14:18.787957  310656 system_pods.go:89] "registry-788cd7d5bc-7m2mc" [ca5b57f5-f785-4af1-9f36-3adbaea3fd71] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 09:14:18.787969  310656 system_pods.go:89] "registry-creds-567fb78d95-j77tl" [a0e51532-983e-47d2-ad4e-da6c02f070ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 09:14:18.787979  310656 system_pods.go:89] "registry-proxy-gzwbd" [7cd96d4c-f796-4c02-b744-e2ed1d51cb2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0110 09:14:18.787988  310656 system_pods.go:89] "snapshot-controller-6588d87457-8gqxp" [7960753b-c675-43d3-bd09-935f5548adf5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 09:14:18.787998  310656 system_pods.go:89] "snapshot-controller-6588d87457-gdmgr" [d05a8993-b3a0-454a-a998-bfdb5fd5e7ac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 09:14:18.788007  310656 system_pods.go:89] "storage-provisioner" [9257aa4e-ce2c-45d7-83a6-726343c3898a] Running
	I0110 09:14:18.868666  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:19.083664  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:19.179454  310656 system_pods.go:86] 19 kube-system pods found
	I0110 09:14:19.179505  310656 system_pods.go:89] "coredns-7d764666f9-ldt9g" [084339d5-8bac-4c1e-8f7a-aca06ce0459e] Running
	I0110 09:14:19.179517  310656 system_pods.go:89] "csi-hostpath-attacher-0" [85991290-9605-4f0c-ac87-6c0ab46ebdc1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 09:14:19.179525  310656 system_pods.go:89] "csi-hostpath-resizer-0" [a0c1f4c0-6082-49dd-a620-f43f004badfc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 09:14:19.179532  310656 system_pods.go:89] "csi-hostpathplugin-cxkkx" [9a7fdae4-42a2-4048-b570-267c1d1ea151] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 09:14:19.179538  310656 system_pods.go:89] "etcd-addons-502860" [d12eaac2-1664-45e0-b86b-de568dc7f737] Running
	I0110 09:14:19.179543  310656 system_pods.go:89] "kindnet-mjsdt" [14f8bcad-0ebe-4e9d-b10f-0d01c576a5f2] Running
	I0110 09:14:19.179549  310656 system_pods.go:89] "kube-apiserver-addons-502860" [8b9124c1-4075-4ed4-9d35-3d09baaeb498] Running
	I0110 09:14:19.179558  310656 system_pods.go:89] "kube-controller-manager-addons-502860" [6dc43c39-17ad-4945-abb9-65cc13185577] Running
	I0110 09:14:19.179565  310656 system_pods.go:89] "kube-ingress-dns-minikube" [cb8c7912-b63e-40b1-8f28-fa210743a3a3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 09:14:19.179580  310656 system_pods.go:89] "kube-proxy-kqg8f" [d4b3e821-b2ca-4b08-abc4-4c02b3b0f7ad] Running
	I0110 09:14:19.179585  310656 system_pods.go:89] "kube-scheduler-addons-502860" [be32a27b-cdc7-4e93-b338-bf9b6f6874cd] Running
	I0110 09:14:19.179592  310656 system_pods.go:89] "metrics-server-5778bb4788-4v9dz" [c45c3fc8-40f0-4cc9-911a-8d9f4dd14ac1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 09:14:19.179603  310656 system_pods.go:89] "nvidia-device-plugin-daemonset-jkcrk" [e2fc42df-8f5d-4a51-a3df-4000a36a0262] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 09:14:19.179613  310656 system_pods.go:89] "registry-788cd7d5bc-7m2mc" [ca5b57f5-f785-4af1-9f36-3adbaea3fd71] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 09:14:19.179623  310656 system_pods.go:89] "registry-creds-567fb78d95-j77tl" [a0e51532-983e-47d2-ad4e-da6c02f070ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 09:14:19.179630  310656 system_pods.go:89] "registry-proxy-gzwbd" [7cd96d4c-f796-4c02-b744-e2ed1d51cb2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0110 09:14:19.179637  310656 system_pods.go:89] "snapshot-controller-6588d87457-8gqxp" [7960753b-c675-43d3-bd09-935f5548adf5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 09:14:19.179652  310656 system_pods.go:89] "snapshot-controller-6588d87457-gdmgr" [d05a8993-b3a0-454a-a998-bfdb5fd5e7ac] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 09:14:19.179659  310656 system_pods.go:89] "storage-provisioner" [9257aa4e-ce2c-45d7-83a6-726343c3898a] Running
	I0110 09:14:19.179668  310656 system_pods.go:126] duration metric: took 1.442422896s to wait for k8s-apps to be running ...
	I0110 09:14:19.179681  310656 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 09:14:19.179736  310656 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:14:19.193997  310656 system_svc.go:56] duration metric: took 14.303568ms WaitForService to wait for kubelet
	I0110 09:14:19.194029  310656 kubeadm.go:587] duration metric: took 16.691914924s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 09:14:19.194047  310656 node_conditions.go:102] verifying NodePressure condition ...
	I0110 09:14:19.197161  310656 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 09:14:19.197196  310656 node_conditions.go:123] node cpu capacity is 2
	I0110 09:14:19.197210  310656 node_conditions.go:105] duration metric: took 3.158135ms to run NodePressure ...
	I0110 09:14:19.197223  310656 start.go:242] waiting for startup goroutines ...
	I0110 09:14:19.282560  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:19.282658  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:19.368982  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:19.566344  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:19.782817  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:19.783012  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:19.869583  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:20.077245  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:20.282361  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:20.283080  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:20.369113  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:20.566909  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:20.781615  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:20.781857  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:20.881584  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:21.067148  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:21.281842  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:21.282462  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:21.382443  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:21.566717  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:21.780835  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:21.781130  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:21.868886  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:22.066554  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:22.283038  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:22.283209  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:22.369662  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:22.567865  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:22.781714  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:22.783663  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:22.869358  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:23.067146  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:23.287915  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:23.289100  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:23.369091  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:23.567371  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:23.781753  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:23.781974  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:23.869093  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:24.066466  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:24.282225  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:24.282669  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:24.368642  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:24.567430  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:24.782230  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:24.782889  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:24.869002  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:25.066986  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:25.281341  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:25.281648  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:25.380994  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:25.566658  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:25.785523  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:25.785976  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:25.869716  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:26.067558  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:26.281744  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:26.282611  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:26.368859  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:26.566561  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:26.782934  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:26.783089  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:26.869095  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:27.066733  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:27.282671  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:27.282802  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:27.368917  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:27.566651  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:27.782539  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:27.782956  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:27.869238  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:28.066480  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:28.283822  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:28.284211  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:28.369523  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:28.569381  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:28.783917  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:28.784431  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:28.868528  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:29.066797  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:29.281808  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:29.281936  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:29.368897  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:29.567453  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:29.782757  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:29.783147  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:29.869433  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:30.083281  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:30.283110  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:30.283171  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:30.369365  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:30.567582  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:30.782213  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:30.782305  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:30.869325  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:31.067521  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:31.282606  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:31.283003  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:31.369602  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:31.567661  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:31.783498  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:31.783595  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:31.869366  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:32.067104  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:32.283395  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:32.283542  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:32.482661  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:32.566766  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:32.780549  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:32.781249  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:32.868555  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:33.067095  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:33.283603  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:33.284128  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:33.369287  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:33.567565  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:33.782752  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:33.783103  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:33.869911  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:34.066753  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:34.283352  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:34.283720  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:34.369163  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:34.570706  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:34.782902  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:34.783124  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:34.868909  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:35.068154  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:35.283606  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:35.283971  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:35.369120  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:35.566726  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:35.781511  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:35.781656  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:35.881997  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:36.066549  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:36.284443  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:36.284640  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:36.368593  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:36.567790  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:36.781339  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:36.781608  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:36.868535  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:37.067196  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:37.283550  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:37.283791  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:37.368958  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:37.566281  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:37.782043  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:37.782478  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:37.868715  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:38.067428  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:38.283642  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:38.283497  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:38.383862  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:38.568078  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:38.782507  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:38.782682  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:38.868660  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:39.068296  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:39.281470  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:39.281636  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:39.368436  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:39.566410  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:39.781850  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:39.782981  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:39.869494  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:40.067712  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:40.282306  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:40.282596  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:40.381925  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:40.566208  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:40.781758  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:40.781934  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:40.869730  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:41.066486  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:41.281137  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:41.281293  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:41.369972  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:41.566523  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:41.781504  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:41.781654  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:41.868468  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:42.067528  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:42.283178  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:42.283797  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:42.371038  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:42.566542  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:42.782264  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:42.782585  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:42.869007  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:43.066646  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:43.287075  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:43.289082  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:43.388639  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:43.567570  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:43.781185  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:43.782439  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:43.868572  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:44.067577  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:44.281385  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:44.282039  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:44.370014  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:44.566390  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:44.782137  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:44.782546  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:44.869333  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:45.067640  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:45.292221  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:45.293125  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:45.370671  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:45.567223  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:45.781938  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:45.782403  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:45.868292  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:46.068625  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:46.281389  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:46.281940  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:46.381776  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:46.566415  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:46.781962  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:46.782725  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:46.869027  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:47.066312  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:47.282904  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:47.283023  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:47.369085  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:47.570974  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:47.781909  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:47.782106  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:47.869281  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:48.066688  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:48.282877  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:48.283410  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:48.370527  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:48.568053  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:48.782811  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:48.784098  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:48.869686  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:49.067554  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:49.281990  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:49.282311  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:49.371258  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:49.571690  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:49.782307  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:49.783751  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:49.868137  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:50.067469  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:50.282335  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:50.282551  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:50.383936  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:50.566841  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:50.783211  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:50.783612  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:50.883470  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:51.069178  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:51.286177  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:51.286390  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:51.369076  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:51.567234  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:51.783147  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:51.783534  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:51.869552  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:52.067689  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:52.291721  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:52.292074  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:52.369591  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:52.567433  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:52.781379  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:52.782382  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:52.868715  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:53.066863  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:53.282500  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:53.282603  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:53.369027  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:53.566321  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:53.782381  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:53.782556  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:53.868580  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:54.066624  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:54.281220  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:54.281357  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:54.369225  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:54.567105  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:54.782970  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:54.783066  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 09:14:54.868480  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:55.066622  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:55.284275  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:55.284512  310656 kapi.go:107] duration metric: took 45.506739639s to wait for kubernetes.io/minikube-addons=registry ...
	I0110 09:14:55.385037  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:55.566375  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:55.781080  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:55.869858  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:56.067924  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:56.281992  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:56.369017  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:56.568326  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:56.781406  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:56.869342  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:57.066509  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:57.281408  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:57.369151  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:57.588524  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:57.781877  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:57.869562  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:58.068004  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:58.282160  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:58.369791  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:58.569329  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:58.781512  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:58.869540  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:59.067269  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:59.281747  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:59.368986  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:14:59.567353  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:14:59.780465  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:14:59.868480  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:00.115826  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:00.303159  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:00.376967  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:00.574340  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:00.786324  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:00.869535  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:01.068753  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:01.281308  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:01.369918  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:01.566101  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:01.782011  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:01.870746  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:02.068520  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:02.282364  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:02.371258  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:02.567172  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:02.783972  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:02.868848  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:03.066638  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:03.283610  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:03.368916  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:03.566589  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:03.826456  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:03.871093  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:04.066411  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:04.281787  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:04.368903  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:04.570146  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:04.781606  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:04.868798  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:05.068035  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:05.282537  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:05.369810  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:05.575077  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:05.782197  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:05.870105  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:06.067379  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:06.281319  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:06.368720  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:06.567976  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:06.781982  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:06.869233  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:07.066958  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:07.281496  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:07.368389  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:07.566953  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:07.781800  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:07.869443  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:08.067336  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:08.281171  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:08.369521  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:08.568389  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:08.780346  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:08.868552  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:09.067461  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:09.281468  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:09.368752  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:09.567690  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:09.782416  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:09.868484  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:10.069432  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:10.281873  310656 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 09:15:10.382298  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:10.577143  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:10.781399  310656 kapi.go:107] duration metric: took 1m1.003868438s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0110 09:15:10.868260  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:11.066835  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:11.368720  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:11.567082  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:11.870182  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:12.067926  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:12.369236  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:12.567185  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:12.868907  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:13.066784  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:13.369587  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 09:15:13.567014  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:13.870046  310656 kapi.go:107] duration metric: took 1m0.504671639s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0110 09:15:13.873921  310656 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-502860 cluster.
	I0110 09:15:13.876649  310656 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0110 09:15:13.879433  310656 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
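	The gcp-auth webhook described in the three lines above skips any pod that carries the `gcp-auth-skip-secret` label. A minimal client-go sketch of creating such a pod against this cluster follows; it is not minikube's own code or part of this test run, and the kubeconfig path, pod name, namespace, and label value are illustrative assumptions.
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load the kubeconfig minikube wrote for the addons-502860 profile (default path is an assumption).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
	
		// The gcp-auth admission webhook leaves pods carrying this label unmodified.
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "busybox-no-gcp-creds", // illustrative name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busybox",
					Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
	
		created, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("created pod without mounted GCP credentials:", created.Name)
	}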
	I0110 09:15:14.066447  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:14.567066  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:15.067977  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:15.567294  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:16.067466  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:16.567013  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:17.066811  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:17.566779  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:18.071549  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:18.566818  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:19.066602  310656 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 09:15:19.567271  310656 kapi.go:107] duration metric: took 1m9.504316703s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0110 09:15:19.570383  310656 out.go:179] * Enabled addons: nvidia-device-plugin, registry-creds, default-storageclass, ingress-dns, inspektor-gadget, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0110 09:15:19.573285  310656 addons.go:530] duration metric: took 1m17.070780943s for enable addons: enabled=[nvidia-device-plugin registry-creds default-storageclass ingress-dns inspektor-gadget amd-gpu-device-plugin cloud-spanner storage-provisioner storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0110 09:15:19.573335  310656 start.go:247] waiting for cluster config update ...
	I0110 09:15:19.573359  310656 start.go:256] writing updated cluster config ...
	I0110 09:15:19.573651  310656 ssh_runner.go:195] Run: rm -f paused
	I0110 09:15:19.578328  310656 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 09:15:19.581683  310656 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ldt9g" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:19.587253  310656 pod_ready.go:94] pod "coredns-7d764666f9-ldt9g" is "Ready"
	I0110 09:15:19.587281  310656 pod_ready.go:86] duration metric: took 5.570997ms for pod "coredns-7d764666f9-ldt9g" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:19.590393  310656 pod_ready.go:83] waiting for pod "etcd-addons-502860" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:19.595258  310656 pod_ready.go:94] pod "etcd-addons-502860" is "Ready"
	I0110 09:15:19.595282  310656 pod_ready.go:86] duration metric: took 4.864494ms for pod "etcd-addons-502860" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:19.597782  310656 pod_ready.go:83] waiting for pod "kube-apiserver-addons-502860" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:19.603039  310656 pod_ready.go:94] pod "kube-apiserver-addons-502860" is "Ready"
	I0110 09:15:19.603067  310656 pod_ready.go:86] duration metric: took 5.257713ms for pod "kube-apiserver-addons-502860" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:19.606018  310656 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-502860" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:19.983064  310656 pod_ready.go:94] pod "kube-controller-manager-addons-502860" is "Ready"
	I0110 09:15:19.983145  310656 pod_ready.go:86] duration metric: took 377.101976ms for pod "kube-controller-manager-addons-502860" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:20.182747  310656 pod_ready.go:83] waiting for pod "kube-proxy-kqg8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:20.582214  310656 pod_ready.go:94] pod "kube-proxy-kqg8f" is "Ready"
	I0110 09:15:20.582244  310656 pod_ready.go:86] duration metric: took 399.466577ms for pod "kube-proxy-kqg8f" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:20.782494  310656 pod_ready.go:83] waiting for pod "kube-scheduler-addons-502860" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:21.182631  310656 pod_ready.go:94] pod "kube-scheduler-addons-502860" is "Ready"
	I0110 09:15:21.182661  310656 pod_ready.go:86] duration metric: took 400.136614ms for pod "kube-scheduler-addons-502860" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:15:21.182676  310656 pod_ready.go:40] duration metric: took 1.604306541s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
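	The pod_ready waits above poll each labelled kube-system pod until it reports a Ready condition, with an extra 4m0s budget. A rough client-go equivalent is sketched below; this is an assumed simplification rather than minikube's pod_ready.go, reusing the label selectors and timeout taken from the log lines above.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isReady reports whether the pod has a PodReady condition with status True.
	func isReady(pod corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
	
		// Same component labels the log waits on, polled every 2s for up to 4 minutes.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				for _, sel := range selectors {
					pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
					if err != nil {
						return false, err
					}
					for _, p := range pods.Items {
						if !isReady(p) {
							return false, nil // keep polling until every matched pod is Ready
						}
					}
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("all labelled kube-system pods are Ready")
	}
	
	PollUntilContextTimeout with immediate=true checks once before the first interval elapses, which matches the near-instant Ready results recorded above for this run.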
	I0110 09:15:21.236510  310656 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 09:15:21.239715  310656 out.go:203] 
	W0110 09:15:21.242769  310656 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 09:15:21.245673  310656 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 09:15:21.248550  310656 out.go:179] * Done! kubectl is now configured to use "addons-502860" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 09:15:19 addons-502860 crio[826]: time="2026-01-10T09:15:19.008756946Z" level=info msg="Created container 2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1: kube-system/csi-hostpathplugin-cxkkx/csi-snapshotter" id=419a4f2b-7072-4856-b5a2-7b9e41acf652 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:15:19 addons-502860 crio[826]: time="2026-01-10T09:15:19.011411255Z" level=info msg="Starting container: 2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1" id=016cf960-2b4c-4916-8dde-3fc2403c07e9 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:15:19 addons-502860 crio[826]: time="2026-01-10T09:15:19.015533537Z" level=info msg="Started container" PID=5014 containerID=2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1 description=kube-system/csi-hostpathplugin-cxkkx/csi-snapshotter id=016cf960-2b4c-4916-8dde-3fc2403c07e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6f3358e79639513477c31d63af0f4292dc833c78ee9315701f96ee2e834d9b6e
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.69972989Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9ba1a513-b63f-4eb8-9050-a44c8dfa9129 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.699859639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.70663161Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e0c3513696c5de882cf5446ca291cf0b902d16c7985bcb703597d7426bb58f55 UID:b815314f-ebfe-4b8e-b8b5-a700cc12f829 NetNS:/var/run/netns/fb5dff77-cb5a-4551-9a3b-4f3665666228 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400011f328}] Aliases:map[]}"
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.70692548Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.724811529Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e0c3513696c5de882cf5446ca291cf0b902d16c7985bcb703597d7426bb58f55 UID:b815314f-ebfe-4b8e-b8b5-a700cc12f829 NetNS:/var/run/netns/fb5dff77-cb5a-4551-9a3b-4f3665666228 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400011f328}] Aliases:map[]}"
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.724969685Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.727771409Z" level=info msg="Ran pod sandbox e0c3513696c5de882cf5446ca291cf0b902d16c7985bcb703597d7426bb58f55 with infra container: default/busybox/POD" id=9ba1a513-b63f-4eb8-9050-a44c8dfa9129 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.729292568Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c3b34715-17df-4c12-b5b1-ee238194733f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.729412356Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c3b34715-17df-4c12-b5b1-ee238194733f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.729488747Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c3b34715-17df-4c12-b5b1-ee238194733f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.733146949Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=47ef92c6-6588-418c-a5ae-eaca57c34e7e name=/runtime.v1.ImageService/PullImage
	Jan 10 09:15:22 addons-502860 crio[826]: time="2026-01-10T09:15:22.733484553Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 09:15:24 addons-502860 crio[826]: time="2026-01-10T09:15:24.699498017Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=47ef92c6-6588-418c-a5ae-eaca57c34e7e name=/runtime.v1.ImageService/PullImage
	Jan 10 09:15:24 addons-502860 crio[826]: time="2026-01-10T09:15:24.700127251Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2cf41e0f-7133-4df9-8da6-b9a21578a16d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:15:24 addons-502860 crio[826]: time="2026-01-10T09:15:24.702196774Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f02a3f9-153c-4ced-b9bd-74c84c62a7c2 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:15:24 addons-502860 crio[826]: time="2026-01-10T09:15:24.708109797Z" level=info msg="Creating container: default/busybox/busybox" id=9383c304-6909-4081-a59d-90d2388ef875 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:15:24 addons-502860 crio[826]: time="2026-01-10T09:15:24.708269749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:15:24 addons-502860 crio[826]: time="2026-01-10T09:15:24.716104051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:15:24 addons-502860 crio[826]: time="2026-01-10T09:15:24.716855149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:15:24 addons-502860 crio[826]: time="2026-01-10T09:15:24.734718839Z" level=info msg="Created container e0122985c2f8ebf4de882378c6d7cf98615e30b7e5d964211a0971bfa4bf1e94: default/busybox/busybox" id=9383c304-6909-4081-a59d-90d2388ef875 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:15:24 addons-502860 crio[826]: time="2026-01-10T09:15:24.735464104Z" level=info msg="Starting container: e0122985c2f8ebf4de882378c6d7cf98615e30b7e5d964211a0971bfa4bf1e94" id=4f9db428-553c-48ef-89b9-d3f52392db66 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:15:24 addons-502860 crio[826]: time="2026-01-10T09:15:24.739605782Z" level=info msg="Started container" PID=5108 containerID=e0122985c2f8ebf4de882378c6d7cf98615e30b7e5d964211a0971bfa4bf1e94 description=default/busybox/busybox id=4f9db428-553c-48ef-89b9-d3f52392db66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0c3513696c5de882cf5446ca291cf0b902d16c7985bcb703597d7426bb58f55
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	e0122985c2f8e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   e0c3513696c5d       busybox                                     default
	2977a479b90a9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   6f3358e796395       csi-hostpathplugin-cxkkx                    kube-system
	f9d31a1b0f033       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          17 seconds ago       Running             csi-provisioner                          0                   6f3358e796395       csi-hostpathplugin-cxkkx                    kube-system
	0cb280c9473cd       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   6f3358e796395       csi-hostpathplugin-cxkkx                    kube-system
	b9bf931eff2af       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           19 seconds ago       Running             hostpath                                 0                   6f3358e796395       csi-hostpathplugin-cxkkx                    kube-system
	ee8f645ef693d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 21 seconds ago       Running             gcp-auth                                 0                   6287ed2c5e345       gcp-auth-5bbcf684b5-z85r4                   gcp-auth
	4f5d2c9f7b310       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             24 seconds ago       Running             controller                               0                   f6c7421f36ea8       ingress-nginx-controller-7847b5c79c-4xqmm   ingress-nginx
	e4a16a06b2474       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                31 seconds ago       Running             node-driver-registrar                    0                   6f3358e796395       csi-hostpathplugin-cxkkx                    kube-system
	2b40b8ab9782f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   32 seconds ago       Exited              patch                                    1                   25459fd15ab30       gcp-auth-certs-patch-5gh49                  gcp-auth
	dea1194f84cee       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:d72bd468a5addb0c00bee32b564fe51e54a7e83195da28701dc4e8e1e019ae08                            33 seconds ago       Running             gadget                                   0                   4fed887f3be11       gadget-jswfc                                gadget
	5b00371ac8754       ghcr.io/manusa/yakd@sha256:68bfcea671292190cdd2b127455726ac24794d1f7c55ce74c33d4648a3a0f50b                                                  37 seconds ago       Running             yakd                                     0                   11127c41b8d81       yakd-dashboard-7bcf5795cd-n9mns             yakd-dashboard
	c0dbd7b37ff83       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              40 seconds ago       Running             registry-proxy                           0                   499fed1145143       registry-proxy-gzwbd                        kube-system
	fab3511991779       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   43 seconds ago       Exited              patch                                    0                   12bd5163aaaeb       ingress-nginx-admission-patch-597sj         ingress-nginx
	b18c47b6c8b09       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        44 seconds ago       Running             metrics-server                           0                   5fbf0ce2c7f1b       metrics-server-5778bb4788-4v9dz             kube-system
	ca6df7d28611c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   47 seconds ago       Running             csi-external-health-monitor-controller   0                   6f3358e796395       csi-hostpathplugin-cxkkx                    kube-system
	ffd4273782778       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   48 seconds ago       Exited              create                                   0                   9f42a463720e6       gcp-auth-certs-create-drdzg                 gcp-auth
	9d7183f14934c       nvcr.io/nvidia/k8s-device-plugin@sha256:10b7b747520ba2314061b5b319d3b2766b9cec1fd9404109c607e85b30af6905                                     49 seconds ago       Running             nvidia-device-plugin-ctr                 0                   38b604c3bbdd7       nvidia-device-plugin-daemonset-jkcrk        kube-system
	544987e8dfa82       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   53 seconds ago       Exited              create                                   0                   2ade4780d3f4a       ingress-nginx-admission-create-q2vz8        ingress-nginx
	0cef4c47edaba       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              55 seconds ago       Running             csi-resizer                              0                   ed65ed7dffaa8       csi-hostpath-resizer-0                      kube-system
	bb4fad33aabbb       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             57 seconds ago       Running             csi-attacher                             0                   5bba7cc07d130       csi-hostpath-attacher-0                     kube-system
	8639b29498a6c       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      58 seconds ago       Running             volume-snapshot-controller               0                   10638305367c1       snapshot-controller-6588d87457-gdmgr        kube-system
	b8d7dd49e6aa3       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           59 seconds ago       Running             registry                                 0                   8d6bce10a8f50       registry-788cd7d5bc-7m2mc                   kube-system
	0f95533a115f8       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   9aac12d5754dc       kube-ingress-dns-minikube                   kube-system
	0ac6fc2d515e2       gcr.io/cloud-spanner-emulator/emulator@sha256:084e511546640743b2d25fe2ee59800bc7ec910acfc12175bad2270f159f5eba                               About a minute ago   Running             cloud-spanner-emulator                   0                   bc2e957b4bed3       cloud-spanner-emulator-5649ccbc87-rgpzx     default
	60a4257e40632       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   2a0d33c3e3787       local-path-provisioner-c44bcd496-nfc29      local-path-storage
	6bdfa3d2092e5       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   8029f0a977527       snapshot-controller-6588d87457-8gqxp        kube-system
	2bc8d72f2fafc       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                                                             About a minute ago   Running             coredns                                  0                   5511b3f73ce7f       coredns-7d764666f9-ldt9g                    kube-system
	41bba34c9f3b7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   f7afbe1472d5e       storage-provisioner                         kube-system
	2a570db330b6a       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           About a minute ago   Running             kindnet-cni                              0                   320856bf31311       kindnet-mjsdt                               kube-system
	eafe47e69fea7       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                                                             About a minute ago   Running             kube-proxy                               0                   fedd5039e406e       kube-proxy-kqg8f                            kube-system
	d9ac16bc6b5ab       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                                                             About a minute ago   Running             kube-scheduler                           0                   29b6460dd0ab9       kube-scheduler-addons-502860                kube-system
	22b7d7f5bce6b       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                                                             About a minute ago   Running             kube-apiserver                           0                   df955f0983435       kube-apiserver-addons-502860                kube-system
	508f986c7bd7f       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                                                             About a minute ago   Running             kube-controller-manager                  0                   3e462843246a1       kube-controller-manager-addons-502860       kube-system
	855037fdb4e98       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                                                             About a minute ago   Running             etcd                                     0                   d59b0738e2d16       etcd-addons-502860                          kube-system
	
	
	==> coredns [2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84] <==
	[INFO] 10.244.0.14:49788 - 10602 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000118295s
	[INFO] 10.244.0.14:49788 - 58564 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001999179s
	[INFO] 10.244.0.14:49788 - 4935 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002984488s
	[INFO] 10.244.0.14:49788 - 19523 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000254403s
	[INFO] 10.244.0.14:49788 - 33523 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000198551s
	[INFO] 10.244.0.14:49070 - 6168 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00025396s
	[INFO] 10.244.0.14:49070 - 5970 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000393679s
	[INFO] 10.244.0.14:39874 - 53809 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000111058s
	[INFO] 10.244.0.14:39874 - 53637 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000138488s
	[INFO] 10.244.0.14:47284 - 1131 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104921s
	[INFO] 10.244.0.14:47284 - 959 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000146701s
	[INFO] 10.244.0.14:33317 - 42881 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001109897s
	[INFO] 10.244.0.14:33317 - 42684 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001245234s
	[INFO] 10.244.0.14:50898 - 22634 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000139997s
	[INFO] 10.244.0.14:50898 - 22478 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000084284s
	[INFO] 10.244.0.20:55612 - 53201 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000183445s
	[INFO] 10.244.0.20:48368 - 12071 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108383s
	[INFO] 10.244.0.20:60077 - 50005 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133385s
	[INFO] 10.244.0.20:58647 - 29072 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000121454s
	[INFO] 10.244.0.20:50305 - 62034 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093055s
	[INFO] 10.244.0.20:50534 - 20704 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00006999s
	[INFO] 10.244.0.20:46216 - 2152 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002688647s
	[INFO] 10.244.0.20:47282 - 32581 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002428871s
	[INFO] 10.244.0.20:38055 - 12207 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000818258s
	[INFO] 10.244.0.20:47414 - 35386 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000691479s
	
	
	==> describe nodes <==
	Name:               addons-502860
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-502860
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=addons-502860
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T09_13_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-502860
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-502860"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 09:13:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-502860
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 09:15:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 09:15:29 +0000   Sat, 10 Jan 2026 09:13:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 09:15:29 +0000   Sat, 10 Jan 2026 09:13:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 09:15:29 +0000   Sat, 10 Jan 2026 09:13:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 09:15:29 +0000   Sat, 10 Jan 2026 09:14:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-502860
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                e6034391-9cf6-4cf0-8346-ec0a67626fb4
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5649ccbc87-rgpzx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  gadget                      gadget-jswfc                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  gcp-auth                    gcp-auth-5bbcf684b5-z85r4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-4xqmm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         85s
	  kube-system                 coredns-7d764666f9-ldt9g                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     91s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 csi-hostpathplugin-cxkkx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 etcd-addons-502860                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         97s
	  kube-system                 kindnet-mjsdt                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-addons-502860                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-addons-502860        200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-kqg8f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-addons-502860                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 metrics-server-5778bb4788-4v9dz              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         86s
	  kube-system                 nvidia-device-plugin-daemonset-jkcrk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 registry-788cd7d5bc-7m2mc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 registry-creds-567fb78d95-j77tl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-proxy-gzwbd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 snapshot-controller-6588d87457-8gqxp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 snapshot-controller-6588d87457-gdmgr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  local-path-storage          local-path-provisioner-c44bcd496-nfc29       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-n9mns              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  93s   node-controller  Node addons-502860 event: Registered Node addons-502860 in Controller
	
	
	==> dmesg <==
	[Jan10 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014404] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501404] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033858] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.756390] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.572614] kauditd_printk_skb: 36 callbacks suppressed
	[Jan10 08:12] hrtimer: interrupt took 28035078 ns
	[Jan10 09:12] kauditd_printk_skb: 8 callbacks suppressed
	[Jan10 09:13] overlayfs: idmapped layers are currently not supported
	[  +0.075266] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3] <==
	{"level":"info","ts":"2026-01-10T09:13:52.880646Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T09:13:52.880722Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2026-01-10T09:13:52.880803Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T09:13:52.880843Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T09:13:52.884537Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2026-01-10T09:13:52.884632Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T09:13:52.884672Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2026-01-10T09:13:52.884706Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2026-01-10T09:13:52.888658Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-502860 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T09:13:52.888744Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T09:13:52.888799Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:13:52.890837Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T09:13:52.888811Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T09:13:52.888924Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T09:13:52.904739Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T09:13:52.905447Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T09:13:52.917801Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2026-01-10T09:13:52.907247Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T09:13:52.912618Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:13:52.923773Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:13:52.923862Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:13:52.954284Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T09:13:52.960997Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"warn","ts":"2026-01-10T09:14:32.481589Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.597619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2026-01-10T09:14:32.481660Z","caller":"traceutil/trace.go:172","msg":"trace[1950616721] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:977; }","duration":"113.682954ms","start":"2026-01-10T09:14:32.367966Z","end":"2026-01-10T09:14:32.481649Z","steps":["trace[1950616721] 'range keys from in-memory index tree'  (duration: 113.537434ms)"],"step_count":1}
	
	
	==> gcp-auth [ee8f645ef693d642a871a4d8532df364933cc289b2c8a6963f223b84a82a2681] <==
	2026/01/10 09:15:13 GCP Auth Webhook started!
	2026/01/10 09:15:22 Ready to marshal response ...
	2026/01/10 09:15:22 Ready to write response ...
	2026/01/10 09:15:22 Ready to marshal response ...
	2026/01/10 09:15:22 Ready to write response ...
	2026/01/10 09:15:22 Ready to marshal response ...
	2026/01/10 09:15:22 Ready to write response ...
	
	
	==> kernel <==
	 09:15:35 up  1:58,  0 user,  load average: 2.58, 2.54, 2.46
	Linux addons-502860 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041] <==
	I0110 09:14:06.921947       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T09:14:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 09:14:07.150772       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 09:14:07.150789       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 09:14:07.150798       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 09:14:07.150899       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 09:14:07.420777       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 09:14:07.420809       1 metrics.go:72] Registering metrics
	I0110 09:14:07.420866       1 controller.go:711] "Syncing nftables rules"
	I0110 09:14:17.151003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 09:14:17.151099       1 main.go:301] handling current node
	I0110 09:14:27.150734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 09:14:27.150824       1 main.go:301] handling current node
	I0110 09:14:37.151470       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 09:14:37.151515       1 main.go:301] handling current node
	I0110 09:14:47.152570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 09:14:47.152607       1 main.go:301] handling current node
	I0110 09:14:57.150764       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 09:14:57.150793       1 main.go:301] handling current node
	I0110 09:15:07.151193       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 09:15:07.151222       1 main.go:301] handling current node
	I0110 09:15:17.150990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 09:15:17.151024       1 main.go:301] handling current node
	I0110 09:15:27.150896       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 09:15:27.150942       1 main.go:301] handling current node
	
	
	==> kube-apiserver [22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94] <==
	W0110 09:14:19.992957       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E0110 09:14:52.540096       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.121.147:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.121.147:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.121.147:443: connect: connection refused" logger="UnhandledError"
	W0110 09:14:52.540470       1 handler_proxy.go:99] no RequestInfo found in the context
	E0110 09:14:52.541775       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0110 09:14:53.542110       1 handler_proxy.go:99] no RequestInfo found in the context
	E0110 09:14:53.542164       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0110 09:14:53.542177       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0110 09:14:53.542110       1 handler_proxy.go:99] no RequestInfo found in the context
	E0110 09:14:53.542249       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0110 09:14:53.543245       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0110 09:14:54.969990       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0110 09:14:57.548251       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.121.147:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.121.147:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	W0110 09:14:57.548357       1 handler_proxy.go:99] no RequestInfo found in the context
	E0110 09:14:57.548397       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0110 09:15:32.867860       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46392: use of closed network connection
	E0110 09:15:32.992528       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46420: use of closed network connection
	
	
	==> kube-controller-manager [508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a] <==
	I0110 09:14:01.640733       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.640782       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.641146       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.641204       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.641225       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.641253       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.644360       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 09:14:01.647040       1 range_allocator.go:433] "Set node PodCIDR" node="addons-502860" podCIDRs=["10.244.0.0/24"]
	I0110 09:14:01.647094       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.647210       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.648655       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.649381       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.738653       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:01.738676       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 09:14:01.738682       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 09:14:01.745536       1 shared_informer.go:377] "Caches are synced"
	E0110 09:14:08.476299       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I0110 09:14:21.641824       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	E0110 09:14:31.662087       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0110 09:14:31.662251       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0110 09:14:31.662307       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 09:14:31.754819       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0110 09:14:31.761627       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 09:14:31.762565       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:31.862106       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382] <==
	I0110 09:14:04.117978       1 server_linux.go:53] "Using iptables proxy"
	I0110 09:14:04.246626       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 09:14:04.350407       1 shared_informer.go:377] "Caches are synced"
	I0110 09:14:04.350474       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0110 09:14:04.350571       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 09:14:04.419781       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 09:14:04.419836       1 server_linux.go:136] "Using iptables Proxier"
	I0110 09:14:04.450915       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 09:14:04.451273       1 server.go:529] "Version info" version="v1.35.0"
	I0110 09:14:04.451300       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 09:14:04.452879       1 config.go:200] "Starting service config controller"
	I0110 09:14:04.452895       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 09:14:04.452911       1 config.go:106] "Starting endpoint slice config controller"
	I0110 09:14:04.452916       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 09:14:04.452934       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 09:14:04.452942       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 09:14:04.453573       1 config.go:309] "Starting node config controller"
	I0110 09:14:04.453581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 09:14:04.482261       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 09:14:04.562803       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 09:14:04.562848       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 09:14:04.562874       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a] <==
	E0110 09:13:54.951857       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 09:13:54.951904       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 09:13:54.951945       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 09:13:54.952024       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 09:13:54.952069       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 09:13:54.952109       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 09:13:54.952166       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 09:13:54.952226       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 09:13:54.952320       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 09:13:54.952378       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 09:13:54.956426       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 09:13:55.774093       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 09:13:55.791411       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 09:13:55.843481       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 09:13:55.857815       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 09:13:55.860332       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 09:13:55.886969       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0110 09:13:55.922864       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 09:13:55.940172       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 09:13:55.953045       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 09:13:55.954912       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 09:13:55.983561       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 09:13:56.083991       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 09:13:56.105323       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I0110 09:13:58.808784       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 09:15:04 addons-502860 kubelet[1244]: I0110 09:15:04.388480    1244 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25459fd15ab309024b457e1237611ef960302d6c9aa7905279f9f45e48554397"
	Jan 10 09:15:04 addons-502860 kubelet[1244]: E0110 09:15:04.975238    1244 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-jswfc" containerName="gadget"
	Jan 10 09:15:05 addons-502860 kubelet[1244]: E0110 09:15:05.395922    1244 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-jswfc" containerName="gadget"
	Jan 10 09:15:06 addons-502860 kubelet[1244]: E0110 09:15:06.399361    1244 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-jswfc" containerName="gadget"
	Jan 10 09:15:10 addons-502860 kubelet[1244]: E0110 09:15:10.312795    1244 prober_manager.go:209] "Readiness probe already exists for container" pod="yakd-dashboard/yakd-dashboard-7bcf5795cd-n9mns" containerName="yakd"
	Jan 10 09:15:10 addons-502860 kubelet[1244]: E0110 09:15:10.412780    1244 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-4xqmm" containerName="controller"
	Jan 10 09:15:11 addons-502860 kubelet[1244]: E0110 09:15:11.415280    1244 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-4xqmm" containerName="controller"
	Jan 10 09:15:12 addons-502860 kubelet[1244]: E0110 09:15:12.419012    1244 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-4xqmm" containerName="controller"
	Jan 10 09:15:13 addons-502860 kubelet[1244]: I0110 09:15:13.446479    1244 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="gcp-auth/gcp-auth-5bbcf684b5-z85r4" podStartSLOduration=37.175991043 podStartE2EDuration="1m0.446444116s" podCreationTimestamp="2026-01-10 09:14:13 +0000 UTC" firstStartedPulling="2026-01-10 09:14:50.070074416 +0000 UTC m=+52.538166647" lastFinishedPulling="2026-01-10 09:15:13.340527407 +0000 UTC m=+75.808619720" observedRunningTime="2026-01-10 09:15:13.445885299 +0000 UTC m=+75.913977554" watchObservedRunningTime="2026-01-10 09:15:13.446444116 +0000 UTC m=+75.914536355"
	Jan 10 09:15:13 addons-502860 kubelet[1244]: I0110 09:15:13.447492    1244 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-4xqmm" podStartSLOduration=44.227172045 podStartE2EDuration="1m4.447478147s" podCreationTimestamp="2026-01-10 09:14:09 +0000 UTC" firstStartedPulling="2026-01-10 09:14:50.021848649 +0000 UTC m=+52.489940888" lastFinishedPulling="2026-01-10 09:15:10.242154751 +0000 UTC m=+72.710246990" observedRunningTime="2026-01-10 09:15:10.445020858 +0000 UTC m=+72.913113089" watchObservedRunningTime="2026-01-10 09:15:13.447478147 +0000 UTC m=+75.915570386"
	Jan 10 09:15:14 addons-502860 kubelet[1244]: E0110 09:15:14.672997    1244 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-addons-502860" containerName="kube-scheduler"
	Jan 10 09:15:15 addons-502860 kubelet[1244]: I0110 09:15:15.855599    1244 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Jan 10 09:15:15 addons-502860 kubelet[1244]: I0110 09:15:15.855648    1244 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Jan 10 09:15:19 addons-502860 kubelet[1244]: E0110 09:15:19.467361    1244 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-cxkkx" containerName="hostpath"
	Jan 10 09:15:19 addons-502860 kubelet[1244]: I0110 09:15:19.489340    1244 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-cxkkx" podStartSLOduration=2.046620919 podStartE2EDuration="1m2.489322996s" podCreationTimestamp="2026-01-10 09:14:17 +0000 UTC" firstStartedPulling="2026-01-10 09:14:18.528245617 +0000 UTC m=+20.996337848" lastFinishedPulling="2026-01-10 09:15:18.970947694 +0000 UTC m=+81.439039925" observedRunningTime="2026-01-10 09:15:19.485777854 +0000 UTC m=+81.953870093" watchObservedRunningTime="2026-01-10 09:15:19.489322996 +0000 UTC m=+81.957415227"
	Jan 10 09:15:19 addons-502860 kubelet[1244]: I0110 09:15:19.675775    1244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec3d42e1-a1cb-43fc-ab1c-3474f3a1543f" path="/var/lib/kubelet/pods/ec3d42e1-a1cb-43fc-ab1c-3474f3a1543f/volumes"
	Jan 10 09:15:20 addons-502860 kubelet[1244]: E0110 09:15:20.471110    1244 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-cxkkx" containerName="hostpath"
	Jan 10 09:15:21 addons-502860 kubelet[1244]: E0110 09:15:21.545462    1244 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Jan 10 09:15:21 addons-502860 kubelet[1244]: E0110 09:15:21.545563    1244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e51532-983e-47d2-ad4e-da6c02f070ab-gcr-creds podName:a0e51532-983e-47d2-ad4e-da6c02f070ab nodeName:}" failed. No retries permitted until 2026-01-10 09:16:25.545546419 +0000 UTC m=+148.013638650 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a0e51532-983e-47d2-ad4e-da6c02f070ab-gcr-creds") pod "registry-creds-567fb78d95-j77tl" (UID: "a0e51532-983e-47d2-ad4e-da6c02f070ab") : secret "registry-creds-gcr" not found
	Jan 10 09:15:22 addons-502860 kubelet[1244]: E0110 09:15:22.422128    1244 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-4xqmm" containerName="controller"
	Jan 10 09:15:22 addons-502860 kubelet[1244]: I0110 09:15:22.552984    1244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b815314f-ebfe-4b8e-b8b5-a700cc12f829-gcp-creds\") pod \"busybox\" (UID: \"b815314f-ebfe-4b8e-b8b5-a700cc12f829\") " pod="default/busybox"
	Jan 10 09:15:22 addons-502860 kubelet[1244]: I0110 09:15:22.553040    1244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgzdc\" (UniqueName: \"kubernetes.io/projected/b815314f-ebfe-4b8e-b8b5-a700cc12f829-kube-api-access-sgzdc\") pod \"busybox\" (UID: \"b815314f-ebfe-4b8e-b8b5-a700cc12f829\") " pod="default/busybox"
	Jan 10 09:15:23 addons-502860 kubelet[1244]: E0110 09:15:23.673491    1244 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-addons-502860" containerName="kube-apiserver"
	Jan 10 09:15:25 addons-502860 kubelet[1244]: I0110 09:15:25.508658    1244 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5376100620000002 podStartE2EDuration="3.508628616s" podCreationTimestamp="2026-01-10 09:15:22 +0000 UTC" firstStartedPulling="2026-01-10 09:15:22.730035666 +0000 UTC m=+85.198127897" lastFinishedPulling="2026-01-10 09:15:24.701054212 +0000 UTC m=+87.169146451" observedRunningTime="2026-01-10 09:15:25.507494457 +0000 UTC m=+87.975586696" watchObservedRunningTime="2026-01-10 09:15:25.508628616 +0000 UTC m=+87.976720847"
	Jan 10 09:15:30 addons-502860 kubelet[1244]: E0110 09:15:30.673626    1244 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-addons-502860" containerName="etcd"
	
	
	==> storage-provisioner [41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16] <==
	W0110 09:15:10.544772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:12.547972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:12.553239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:14.556698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:14.561710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:16.566224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:16.571588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:18.575154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:18.584121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:20.587050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:20.593907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:22.597526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:22.604403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:24.608225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:24.613600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:26.616780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:26.621089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:28.624920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:28.629316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:30.632857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:30.637939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:32.641050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:32.657347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:34.659915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 09:15:34.666951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-502860 -n addons-502860
helpers_test.go:270: (dbg) Run:  kubectl --context addons-502860 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-q2vz8 ingress-nginx-admission-patch-597sj registry-creds-567fb78d95-j77tl
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-502860 describe pod ingress-nginx-admission-create-q2vz8 ingress-nginx-admission-patch-597sj registry-creds-567fb78d95-j77tl
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-502860 describe pod ingress-nginx-admission-create-q2vz8 ingress-nginx-admission-patch-597sj registry-creds-567fb78d95-j77tl: exit status 1 (92.009376ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-q2vz8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-597sj" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-j77tl" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-502860 describe pod ingress-nginx-admission-create-q2vz8 ingress-nginx-admission-patch-597sj registry-creds-567fb78d95-j77tl: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable headlamp --alsologtostderr -v=1: exit status 11 (267.55578ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:15:36.233931  317396 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:15:36.234949  317396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:36.234965  317396 out.go:374] Setting ErrFile to fd 2...
	I0110 09:15:36.234971  317396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:36.235354  317396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:15:36.235684  317396 mustload.go:66] Loading cluster: addons-502860
	I0110 09:15:36.236066  317396 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:36.236083  317396 addons.go:622] checking whether the cluster is paused
	I0110 09:15:36.236188  317396 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:36.236199  317396 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:15:36.236767  317396 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:15:36.257320  317396 ssh_runner.go:195] Run: systemctl --version
	I0110 09:15:36.257382  317396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:15:36.278738  317396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:15:36.387065  317396 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:15:36.387204  317396 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:15:36.417017  317396 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:15:36.417049  317396 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:15:36.417055  317396 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:15:36.417060  317396 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:15:36.417063  317396 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:15:36.417068  317396 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:15:36.417072  317396 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:15:36.417080  317396 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:15:36.417085  317396 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:15:36.417091  317396 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:15:36.417095  317396 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:15:36.417104  317396 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:15:36.417108  317396 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:15:36.417111  317396 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:15:36.417113  317396 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:15:36.417118  317396 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:15:36.417121  317396 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:15:36.417131  317396 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:15:36.417137  317396 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:15:36.417142  317396 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:15:36.417147  317396 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:15:36.417152  317396 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:15:36.417156  317396 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:15:36.417161  317396 cri.go:96] found id: ""
	I0110 09:15:36.417215  317396 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:15:36.433136  317396 out.go:203] 
	W0110 09:15:36.436098  317396 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:15:36.436120  317396 out.go:285] * 
	* 
	W0110 09:15:36.439715  317396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:15:36.442661  317396 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.19s)
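The exit status 11 failures above (and in the following addon tests) share one pattern: before disabling an addon, the CLI checks whether the cluster is paused by listing kube-system containers via crictl (which succeeds in the traces) and then running "sudo runc list -f json", which fails on this crio node with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal way to reproduce the failing check by hand, reusing the profile name and commands captured in the logs (illustrative sketch only, not part of the test suite):

	# CRI-level listing used by the pause check; succeeds in the captured logs
	out/minikube-linux-arm64 -p addons-502860 ssh "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# runc-level listing attempted next; exits 1 on this image because /run/runc is absent
	out/minikube-linux-arm64 -p addons-502860 ssh "sudo runc list -f json"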

                                                
                                    
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-rgpzx" [b2f1451b-a738-4064-941f-c879d742453b] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003905905s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (251.73687ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:15:54.230155  317851 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:15:54.230909  317851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:54.230924  317851 out.go:374] Setting ErrFile to fd 2...
	I0110 09:15:54.230931  317851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:54.231199  317851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:15:54.231493  317851 mustload.go:66] Loading cluster: addons-502860
	I0110 09:15:54.231863  317851 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:54.231887  317851 addons.go:622] checking whether the cluster is paused
	I0110 09:15:54.232003  317851 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:54.232018  317851 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:15:54.232548  317851 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:15:54.249873  317851 ssh_runner.go:195] Run: systemctl --version
	I0110 09:15:54.249933  317851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:15:54.266313  317851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:15:54.371095  317851 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:15:54.371185  317851 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:15:54.400171  317851 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:15:54.400248  317851 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:15:54.400268  317851 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:15:54.400289  317851 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:15:54.400328  317851 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:15:54.400352  317851 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:15:54.400371  317851 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:15:54.400392  317851 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:15:54.400428  317851 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:15:54.400456  317851 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:15:54.400486  317851 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:15:54.400536  317851 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:15:54.400547  317851 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:15:54.400560  317851 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:15:54.400564  317851 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:15:54.400586  317851 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:15:54.400591  317851 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:15:54.400597  317851 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:15:54.400600  317851 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:15:54.400603  317851 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:15:54.400608  317851 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:15:54.400616  317851 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:15:54.400619  317851 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:15:54.400622  317851 cri.go:96] found id: ""
	I0110 09:15:54.400699  317851 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:15:54.416663  317851 out.go:203] 
	W0110 09:15:54.419652  317851 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:15:54.419674  317851 out.go:285] * 
	* 
	W0110 09:15:54.423420  317851 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:15:54.426568  317851 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                    
TestAddons/parallel/LocalPath (9.55s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-502860 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-502860 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-502860 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [719906c0-bba7-45de-b770-80bc22db07f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [719906c0-bba7-45de-b770-80bc22db07f9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [719906c0-bba7-45de-b770-80bc22db07f9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003279515s
addons_test.go:969: (dbg) Run:  kubectl --context addons-502860 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 ssh "cat /opt/local-path-provisioner/pvc-0ee4262f-523e-4dc6-ae05-f15120de2359_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-502860 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-502860 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (277.417962ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:15:58.367577  318075 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:15:58.368368  318075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:58.368426  318075 out.go:374] Setting ErrFile to fd 2...
	I0110 09:15:58.368447  318075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:58.369270  318075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:15:58.369711  318075 mustload.go:66] Loading cluster: addons-502860
	I0110 09:15:58.370411  318075 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:58.370459  318075 addons.go:622] checking whether the cluster is paused
	I0110 09:15:58.370734  318075 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:58.370765  318075 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:15:58.371662  318075 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:15:58.391797  318075 ssh_runner.go:195] Run: systemctl --version
	I0110 09:15:58.391858  318075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:15:58.411030  318075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:15:58.515652  318075 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:15:58.515737  318075 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:15:58.546211  318075 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:15:58.546232  318075 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:15:58.546237  318075 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:15:58.546241  318075 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:15:58.546244  318075 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:15:58.546248  318075 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:15:58.546251  318075 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:15:58.546255  318075 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:15:58.546258  318075 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:15:58.546267  318075 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:15:58.546271  318075 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:15:58.546274  318075 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:15:58.546278  318075 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:15:58.546282  318075 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:15:58.546285  318075 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:15:58.546290  318075 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:15:58.546294  318075 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:15:58.546299  318075 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:15:58.546302  318075 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:15:58.546305  318075 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:15:58.546318  318075 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:15:58.546327  318075 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:15:58.546330  318075 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:15:58.546333  318075 cri.go:96] found id: ""
	I0110 09:15:58.546385  318075 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:15:58.577290  318075 out.go:203] 
	W0110 09:15:58.580600  318075 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:15:58.580626  318075 out.go:285] * 
	* 
	W0110 09:15:58.583861  318075 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:15:58.587019  318075 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.55s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.33s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-jkcrk" [e2fc42df-8f5d-4a51-a3df-4000a36a0262] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003362325s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable nvidia-device-plugin --alsologtostderr -v=1
2026/01/10 09:15:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (329.129227ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:15:48.781307  317606 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:15:48.782222  317606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:48.782244  317606 out.go:374] Setting ErrFile to fd 2...
	I0110 09:15:48.782251  317606 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:48.782535  317606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:15:48.782837  317606 mustload.go:66] Loading cluster: addons-502860
	I0110 09:15:48.783254  317606 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:48.783276  317606 addons.go:622] checking whether the cluster is paused
	I0110 09:15:48.784409  317606 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:48.784443  317606 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:15:48.785050  317606 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:15:48.819463  317606 ssh_runner.go:195] Run: systemctl --version
	I0110 09:15:48.819528  317606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:15:48.850051  317606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:15:48.966732  317606 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:15:48.966847  317606 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:15:49.006748  317606 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:15:49.006775  317606 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:15:49.006780  317606 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:15:49.006784  317606 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:15:49.006788  317606 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:15:49.006793  317606 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:15:49.006798  317606 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:15:49.006806  317606 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:15:49.006809  317606 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:15:49.006816  317606 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:15:49.006819  317606 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:15:49.006823  317606 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:15:49.006826  317606 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:15:49.006829  317606 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:15:49.006834  317606 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:15:49.006838  317606 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:15:49.006841  317606 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:15:49.006845  317606 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:15:49.006848  317606 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:15:49.006851  317606 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:15:49.006856  317606 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:15:49.006859  317606 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:15:49.006862  317606 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:15:49.006866  317606 cri.go:96] found id: ""
	I0110 09:15:49.006946  317606 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:15:49.026279  317606 out.go:203] 
	W0110 09:15:49.029268  317606 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:15:49.029292  317606 out.go:285] * 
	* 
	W0110 09:15:49.032885  317606 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:15:49.036141  317606 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.33s)
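Annotation: every failing "addons disable" call above dies the same way — before disabling an addon, minikube probes whether the cluster is paused, and on this crio node the probe `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory". The Go sketch below reproduces that probe and a hypothetical fallback to crictl; it is an illustration under those assumptions, not minikube's actual cri.go code.

	// pausedprobe.go - minimal sketch of the paused-container probe the logs
	// above show failing. NOT minikube's implementation; the crictl fallback
	// when /run/runc is missing is an assumption for illustration only.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func listRuncContainers() ([]byte, error) {
		// Same command as in the stderr above; fails with
		// "open /run/runc: no such file or directory" when runc has no state dir.
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		out, err := listRuncContainers()
		if err != nil {
			// Hypothetical fallback: ask the CRI runtime directly instead of runc.
			fmt.Fprintf(os.Stderr, "runc list failed (%v), falling back to crictl\n", err)
			out, err = exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a", "--quiet").Output()
			if err != nil {
				fmt.Fprintf(os.Stderr, "crictl fallback failed too: %v\n", err)
				os.Exit(1)
			}
		}
		fmt.Printf("%s\n", out)
	}

Running something like this on the node would confirm whether the failure is specific to runc's state directory or affects the CRI socket as well.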

                                                
                                    
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-n9mns" [91438629-1617-40c2-a2e5-60822f9179f4] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003586674s
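Annotation: the "waiting 2m0s for pods matching ..." step above is the usual label-selector readiness wait. A hedged client-go sketch of that pattern follows; the namespace, label, and timeout come from the log, but the helper itself is illustrative and is not the harness's helpers_test.go code.

	// waitpods.go - illustrative sketch of waiting for pods matching a label
	// selector to reach Running, as the Yakd check above does.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForRunningPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false
						break
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods %q in %q not running within %s", selector, ns, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForRunningPods(cs, "yakd-dashboard", "app.kubernetes.io/name=yakd-dashboard", 2*time.Minute); err != nil {
			panic(err)
		}
	}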
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-502860 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-502860 addons disable yakd --alsologtostderr -v=1: exit status 11 (255.585032ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:15:42.495377  317458 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:15:42.496183  317458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:42.496223  317458 out.go:374] Setting ErrFile to fd 2...
	I0110 09:15:42.496246  317458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:15:42.496694  317458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:15:42.497099  317458 mustload.go:66] Loading cluster: addons-502860
	I0110 09:15:42.497766  317458 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:42.497833  317458 addons.go:622] checking whether the cluster is paused
	I0110 09:15:42.497993  317458 config.go:182] Loaded profile config "addons-502860": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:15:42.498019  317458 host.go:66] Checking if "addons-502860" exists ...
	I0110 09:15:42.498754  317458 cli_runner.go:164] Run: docker container inspect addons-502860 --format={{.State.Status}}
	I0110 09:15:42.517332  317458 ssh_runner.go:195] Run: systemctl --version
	I0110 09:15:42.517392  317458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-502860
	I0110 09:15:42.535313  317458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/addons-502860/id_rsa Username:docker}
	I0110 09:15:42.639052  317458 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:15:42.639163  317458 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:15:42.677434  317458 cri.go:96] found id: "2977a479b90a989b5730ba88d578622ae4614777de6fedd390360c442d1f64a1"
	I0110 09:15:42.677454  317458 cri.go:96] found id: "f9d31a1b0f0338a80060e314adc30ec6503b34c146b35ff2a297c2a37959d3ac"
	I0110 09:15:42.677459  317458 cri.go:96] found id: "0cb280c9473cd44ba2dcf771fb4a61328a62e218f335eff19eccb8a19a7e108d"
	I0110 09:15:42.677463  317458 cri.go:96] found id: "b9bf931eff2af8ac8d3c4930ed1307a2cb7cfef97c2a38caa53adf8cb4f0c755"
	I0110 09:15:42.677466  317458 cri.go:96] found id: "e4a16a06b247451920d221fb95ff311cb4d45c3259c08bd2301bd66b175a1ed4"
	I0110 09:15:42.677469  317458 cri.go:96] found id: "c0dbd7b37ff83cf80cbb18f77b493c3a7c37e1bcdc40915a38d7bf763e31ad33"
	I0110 09:15:42.677473  317458 cri.go:96] found id: "b18c47b6c8b09733f1ceb4708e234cd584a031bd2e5c1ea1df0599b37c2751ca"
	I0110 09:15:42.677476  317458 cri.go:96] found id: "ca6df7d28611cf7ec559914fe3b1a6769484b37c1066d5a71e7ca0c4d1c7de32"
	I0110 09:15:42.677479  317458 cri.go:96] found id: "9d7183f14934cc8af969fbe35901eb495fdbf3214efc2e304b0db26be470bc53"
	I0110 09:15:42.677486  317458 cri.go:96] found id: "0cef4c47edaba4141a270885ff5ae729e2debfffa34af6fe4da0c3e3f523ef77"
	I0110 09:15:42.677494  317458 cri.go:96] found id: "bb4fad33aabbb5711c4b477222b7edf340e039b32b09a224aacddeb74d4555ef"
	I0110 09:15:42.677497  317458 cri.go:96] found id: "8639b29498a6ca248e7fd9a2923d0e85639efa8615c9a7ab25359934c9e3e84a"
	I0110 09:15:42.677500  317458 cri.go:96] found id: "b8d7dd49e6aa3747a48ee1b0c422a5c192db8a11dc66861f135cc29b149ccebe"
	I0110 09:15:42.677507  317458 cri.go:96] found id: "0f95533a115f8256c4fea16d34f9825599af2413b6919159224be67f72f340f7"
	I0110 09:15:42.677510  317458 cri.go:96] found id: "6bdfa3d2092e58d29c2f14dc2eaa179c5d0739ffde1090d46446e59435bfcc48"
	I0110 09:15:42.677516  317458 cri.go:96] found id: "2bc8d72f2fafc2c673ffb1600a5605280cf09eef2c0323053aa44e8d81c8dd84"
	I0110 09:15:42.677520  317458 cri.go:96] found id: "41bba34c9f3b7d9f1db06ae7fa87a56b8ae0cbeecd465a5b95dcbab8f6c24a16"
	I0110 09:15:42.677524  317458 cri.go:96] found id: "2a570db330b6ac4534d0b17f1e4b97af1d98afdbd9b9df056e96afd18834b041"
	I0110 09:15:42.677527  317458 cri.go:96] found id: "eafe47e69fea7df02ca140025ebd69cdf1fab0beaf3c3c183100e62cf32c8382"
	I0110 09:15:42.677535  317458 cri.go:96] found id: "d9ac16bc6b5ab36e4898c5046729342d0c6a72eeab3e3d43778e2ae05b9ca56a"
	I0110 09:15:42.677547  317458 cri.go:96] found id: "22b7d7f5bce6b072b79cca649c0692125b8dfd579e190d2ef25c73ba71007b94"
	I0110 09:15:42.677551  317458 cri.go:96] found id: "508f986c7bd7fc6b00af3cf69dbf8cb276a4e1fa63f121355b249da793b9ac8a"
	I0110 09:15:42.677555  317458 cri.go:96] found id: "855037fdb4e98ab773293496d16e627974535589b2cad2a2a19c2f8f066869d3"
	I0110 09:15:42.677558  317458 cri.go:96] found id: ""
	I0110 09:15:42.677616  317458 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:15:42.692663  317458 out.go:203] 
	W0110 09:15:42.695518  317458 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:15:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:15:42.695548  317458 out.go:285] * 
	* 
	W0110 09:15:42.698955  317458 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:15:42.701888  317458 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-502860 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
TestForceSystemdFlag (502.72s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0110 09:59:41.352718  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m17.5715756s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-524845] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-524845" primary control-plane node in "force-systemd-flag-524845" cluster
	* Pulling base image v0.0.48-1767944074-22401 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:58:07.553679  490351 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:58:07.553848  490351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:58:07.553862  490351 out.go:374] Setting ErrFile to fd 2...
	I0110 09:58:07.553868  490351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:58:07.554176  490351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:58:07.554702  490351 out.go:368] Setting JSON to false
	I0110 09:58:07.555848  490351 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9637,"bootTime":1768029451,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:58:07.555930  490351 start.go:143] virtualization:  
	I0110 09:58:07.559535  490351 out.go:179] * [force-systemd-flag-524845] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:58:07.563995  490351 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:58:07.564051  490351 notify.go:221] Checking for updates...
	I0110 09:58:07.570504  490351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:58:07.573594  490351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:58:07.576781  490351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:58:07.579932  490351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:58:07.582938  490351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:58:07.591723  490351 config.go:182] Loaded profile config "force-systemd-env-646877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:58:07.591896  490351 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:58:07.616596  490351 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:58:07.616787  490351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:58:07.678314  490351 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:58:07.668051789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:58:07.678429  490351 docker.go:319] overlay module found
	I0110 09:58:07.681014  490351 out.go:179] * Using the docker driver based on user configuration
	I0110 09:58:07.683322  490351 start.go:309] selected driver: docker
	I0110 09:58:07.683343  490351 start.go:928] validating driver "docker" against <nil>
	I0110 09:58:07.683358  490351 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:58:07.684110  490351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:58:07.736389  490351 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:58:07.726925717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:58:07.736620  490351 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 09:58:07.736842  490351 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 09:58:07.739329  490351 out.go:179] * Using Docker driver with root privileges
	I0110 09:58:07.741743  490351 cni.go:84] Creating CNI manager for ""
	I0110 09:58:07.741819  490351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:58:07.741839  490351 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 09:58:07.741921  490351 start.go:353] cluster config:
	{Name:force-systemd-flag-524845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-524845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:58:07.744819  490351 out.go:179] * Starting "force-systemd-flag-524845" primary control-plane node in "force-systemd-flag-524845" cluster
	I0110 09:58:07.747245  490351 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 09:58:07.749876  490351 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:58:07.752571  490351 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:58:07.752619  490351 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 09:58:07.752630  490351 cache.go:65] Caching tarball of preloaded images
	I0110 09:58:07.752657  490351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 09:58:07.752727  490351 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 09:58:07.752738  490351 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 09:58:07.752837  490351 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/config.json ...
	I0110 09:58:07.752855  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/config.json: {Name:mkc575e6211f64f692579bcfde7f5500b6e9ddb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:07.778196  490351 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 09:58:07.778220  490351 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 09:58:07.778237  490351 cache.go:243] Successfully downloaded all kic artifacts
	I0110 09:58:07.778269  490351 start.go:360] acquireMachinesLock for force-systemd-flag-524845: {Name:mkd6a15301a8cdc65884d926e54f9d5744e40d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 09:58:07.778383  490351 start.go:364] duration metric: took 93.573µs to acquireMachinesLock for "force-systemd-flag-524845"
	I0110 09:58:07.778415  490351 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-524845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-524845 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 09:58:07.778480  490351 start.go:125] createHost starting for "" (driver="docker")
	I0110 09:58:07.781858  490351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 09:58:07.782098  490351 start.go:159] libmachine.API.Create for "force-systemd-flag-524845" (driver="docker")
	I0110 09:58:07.782134  490351 client.go:173] LocalClient.Create starting
	I0110 09:58:07.782206  490351 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 09:58:07.782245  490351 main.go:144] libmachine: Decoding PEM data...
	I0110 09:58:07.782264  490351 main.go:144] libmachine: Parsing certificate...
	I0110 09:58:07.782329  490351 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 09:58:07.782351  490351 main.go:144] libmachine: Decoding PEM data...
	I0110 09:58:07.782362  490351 main.go:144] libmachine: Parsing certificate...
	I0110 09:58:07.782738  490351 cli_runner.go:164] Run: docker network inspect force-systemd-flag-524845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 09:58:07.799057  490351 cli_runner.go:211] docker network inspect force-systemd-flag-524845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 09:58:07.799138  490351 network_create.go:284] running [docker network inspect force-systemd-flag-524845] to gather additional debugging logs...
	I0110 09:58:07.799162  490351 cli_runner.go:164] Run: docker network inspect force-systemd-flag-524845
	W0110 09:58:07.815154  490351 cli_runner.go:211] docker network inspect force-systemd-flag-524845 returned with exit code 1
	I0110 09:58:07.815185  490351 network_create.go:287] error running [docker network inspect force-systemd-flag-524845]: docker network inspect force-systemd-flag-524845: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-524845 not found
	I0110 09:58:07.815205  490351 network_create.go:289] output of [docker network inspect force-systemd-flag-524845]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-524845 not found
	
	** /stderr **
	I0110 09:58:07.815297  490351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:58:07.832656  490351 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 09:58:07.833146  490351 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 09:58:07.833394  490351 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 09:58:07.833681  490351 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c97ab4c75741 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:12:39:bd:f9:f1:fc} reservation:<nil>}
	I0110 09:58:07.834131  490351 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e9c10}
	I0110 09:58:07.834152  490351 network_create.go:124] attempt to create docker network force-systemd-flag-524845 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 09:58:07.834221  490351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-524845 force-systemd-flag-524845
	I0110 09:58:07.892173  490351 network_create.go:108] docker network force-systemd-flag-524845 192.168.85.0/24 created
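	Annotation: the network_create lines above walk candidate private /24 subnets (49, 58, 67, 76 are taken; 85 is free). The short Go sketch below mirrors that stepping logic with the taken set hard-coded from this log; it is an illustration, not minikube's network.go.

	// freesubnet.go - sketch of stepping through candidate 192.168.x.0/24
	// subnets and skipping ones already used, mirroring the "skipping subnet
	// ... using free private subnet" lines above.
	package main

	import "fmt"

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		// The log shows the third octet advancing in steps of 9 starting at 49.
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[cidr] {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", cidr)
			break
		}
	}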
	I0110 09:58:07.892207  490351 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-524845" container
	I0110 09:58:07.892281  490351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 09:58:07.933086  490351 cli_runner.go:164] Run: docker volume create force-systemd-flag-524845 --label name.minikube.sigs.k8s.io=force-systemd-flag-524845 --label created_by.minikube.sigs.k8s.io=true
	I0110 09:58:07.959040  490351 oci.go:103] Successfully created a docker volume force-systemd-flag-524845
	I0110 09:58:07.959128  490351 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-524845-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-524845 --entrypoint /usr/bin/test -v force-systemd-flag-524845:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 09:58:08.499808  490351 oci.go:107] Successfully prepared a docker volume force-systemd-flag-524845
	I0110 09:58:08.499893  490351 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:58:08.499911  490351 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 09:58:08.500007  490351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-524845:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 09:58:12.385731  490351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-524845:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.885683498s)
	I0110 09:58:12.385763  490351 kic.go:203] duration metric: took 3.885849941s to extract preloaded images to volume ...
	W0110 09:58:12.385889  490351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 09:58:12.386026  490351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 09:58:12.467213  490351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-524845 --name force-systemd-flag-524845 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-524845 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-524845 --network force-systemd-flag-524845 --ip 192.168.85.2 --volume force-systemd-flag-524845:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 09:58:12.784803  490351 cli_runner.go:164] Run: docker container inspect force-systemd-flag-524845 --format={{.State.Running}}
	I0110 09:58:12.803015  490351 cli_runner.go:164] Run: docker container inspect force-systemd-flag-524845 --format={{.State.Status}}
	I0110 09:58:12.824900  490351 cli_runner.go:164] Run: docker exec force-systemd-flag-524845 stat /var/lib/dpkg/alternatives/iptables
	I0110 09:58:12.886539  490351 oci.go:144] the created container "force-systemd-flag-524845" has a running status.
	I0110 09:58:12.886567  490351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa...
	I0110 09:58:13.322259  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 09:58:13.322346  490351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 09:58:13.360073  490351 cli_runner.go:164] Run: docker container inspect force-systemd-flag-524845 --format={{.State.Status}}
	I0110 09:58:13.386765  490351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 09:58:13.386784  490351 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-524845 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 09:58:13.446974  490351 cli_runner.go:164] Run: docker container inspect force-systemd-flag-524845 --format={{.State.Status}}
	I0110 09:58:13.470135  490351 machine.go:94] provisionDockerMachine start ...
	I0110 09:58:13.470238  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:13.497722  490351 main.go:144] libmachine: Using SSH client type: native
	I0110 09:58:13.498048  490351 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I0110 09:58:13.498058  490351 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 09:58:13.725498  490351 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-524845
	
	I0110 09:58:13.725565  490351 ubuntu.go:182] provisioning hostname "force-systemd-flag-524845"
	I0110 09:58:13.725660  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:13.744097  490351 main.go:144] libmachine: Using SSH client type: native
	I0110 09:58:13.744429  490351 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I0110 09:58:13.744440  490351 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-524845 && echo "force-systemd-flag-524845" | sudo tee /etc/hostname
	I0110 09:58:13.909634  490351 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-524845
	
	I0110 09:58:13.909728  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:13.932967  490351 main.go:144] libmachine: Using SSH client type: native
	I0110 09:58:13.933272  490351 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I0110 09:58:13.933293  490351 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-524845' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-524845/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-524845' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 09:58:14.108892  490351 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 09:58:14.108916  490351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 09:58:14.108934  490351 ubuntu.go:190] setting up certificates
	I0110 09:58:14.108953  490351 provision.go:84] configureAuth start
	I0110 09:58:14.109014  490351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-524845
	I0110 09:58:14.125630  490351 provision.go:143] copyHostCerts
	I0110 09:58:14.125675  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 09:58:14.125712  490351 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 09:58:14.125724  490351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 09:58:14.125802  490351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 09:58:14.125885  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 09:58:14.125912  490351 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 09:58:14.125920  490351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 09:58:14.125948  490351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 09:58:14.125992  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 09:58:14.126012  490351 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 09:58:14.126022  490351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 09:58:14.126049  490351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 09:58:14.126106  490351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-524845 san=[127.0.0.1 192.168.85.2 force-systemd-flag-524845 localhost minikube]
	I0110 09:58:14.560742  490351 provision.go:177] copyRemoteCerts
	I0110 09:58:14.560814  490351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 09:58:14.560859  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:14.578654  490351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa Username:docker}
	I0110 09:58:14.685851  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 09:58:14.685911  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 09:58:14.707728  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 09:58:14.707788  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 09:58:14.731525  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 09:58:14.731597  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 09:58:14.749546  490351 provision.go:87] duration metric: took 640.570546ms to configureAuth
	I0110 09:58:14.749575  490351 ubuntu.go:206] setting minikube options for container-runtime
	I0110 09:58:14.749758  490351 config.go:182] Loaded profile config "force-systemd-flag-524845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:58:14.749869  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:14.767063  490351 main.go:144] libmachine: Using SSH client type: native
	I0110 09:58:14.767380  490351 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I0110 09:58:14.767402  490351 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 09:58:15.105106  490351 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 09:58:15.105131  490351 machine.go:97] duration metric: took 1.634977324s to provisionDockerMachine
	I0110 09:58:15.105143  490351 client.go:176] duration metric: took 7.32300228s to LocalClient.Create
	I0110 09:58:15.105154  490351 start.go:167] duration metric: took 7.323057576s to libmachine.API.Create "force-systemd-flag-524845"
	I0110 09:58:15.105162  490351 start.go:293] postStartSetup for "force-systemd-flag-524845" (driver="docker")
	I0110 09:58:15.105174  490351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 09:58:15.105245  490351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 09:58:15.105292  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:15.123800  490351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa Username:docker}
	I0110 09:58:15.228404  490351 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 09:58:15.231553  490351 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 09:58:15.231581  490351 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 09:58:15.231593  490351 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 09:58:15.231648  490351 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 09:58:15.231736  490351 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 09:58:15.231748  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> /etc/ssl/certs/3098982.pem
	I0110 09:58:15.231858  490351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 09:58:15.238992  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 09:58:15.256546  490351 start.go:296] duration metric: took 151.368464ms for postStartSetup
	I0110 09:58:15.256951  490351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-524845
	I0110 09:58:15.273760  490351 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/config.json ...
	I0110 09:58:15.274042  490351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:58:15.274095  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:15.289696  490351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa Username:docker}
	I0110 09:58:15.389886  490351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 09:58:15.394911  490351 start.go:128] duration metric: took 7.616415969s to createHost
	I0110 09:58:15.394940  490351 start.go:83] releasing machines lock for "force-systemd-flag-524845", held for 7.616542058s
	I0110 09:58:15.395015  490351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-524845
	I0110 09:58:15.420582  490351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 09:58:15.420672  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:15.420827  490351 ssh_runner.go:195] Run: cat /version.json
	I0110 09:58:15.420859  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:15.453334  490351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa Username:docker}
	I0110 09:58:15.453867  490351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa Username:docker}
	I0110 09:58:15.676978  490351 ssh_runner.go:195] Run: systemctl --version
	I0110 09:58:15.683523  490351 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 09:58:15.722378  490351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 09:58:15.727020  490351 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 09:58:15.727146  490351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 09:58:15.756451  490351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 09:58:15.756482  490351 start.go:496] detecting cgroup driver to use...
	I0110 09:58:15.756530  490351 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 09:58:15.756623  490351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 09:58:15.775531  490351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 09:58:15.788613  490351 docker.go:218] disabling cri-docker service (if available) ...
	I0110 09:58:15.788677  490351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 09:58:15.805159  490351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 09:58:15.824052  490351 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 09:58:15.948607  490351 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 09:58:16.076637  490351 docker.go:234] disabling docker service ...
	I0110 09:58:16.076764  490351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 09:58:16.099692  490351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 09:58:16.113980  490351 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 09:58:16.245873  490351 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 09:58:16.360721  490351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 09:58:16.374259  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 09:58:16.388020  490351 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 09:58:16.388089  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.397382  490351 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 09:58:16.397461  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.406597  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.415927  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.425226  490351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 09:58:16.434195  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.443064  490351 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.456142  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.465329  490351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 09:58:16.473497  490351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 09:58:16.481003  490351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:58:16.606702  490351 ssh_runner.go:195] Run: sudo systemctl restart crio
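	Annotation: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so cri-o uses the systemd cgroup manager, runs conmon in the pod cgroup, and pins the pause image, followed by a daemon-reload and crio restart. The Go sketch below applies the two core substitutions with regexp; the file path and keys are taken from the log, but this is a hedged illustration of the effect, not minikube's provisioning code.

	// criocgroup.go - sketch of the config rewrite performed by the sed
	// commands above: pin the pause image and force cgroup_manager = "systemd"
	// in /etc/crio/crio.conf.d/02-crio.conf. Illustrative only; run as root.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		conf := string(data)
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			log.Fatal(err)
		}
		// cri-o must then be restarted (sudo systemctl restart crio), as in the
		// step above, for the new cgroup driver to take effect.
	}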
	I0110 09:58:16.783719  490351 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 09:58:16.783789  490351 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 09:58:16.787356  490351 start.go:574] Will wait 60s for crictl version
	I0110 09:58:16.787417  490351 ssh_runner.go:195] Run: which crictl
	I0110 09:58:16.790756  490351 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 09:58:16.817454  490351 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 09:58:16.817566  490351 ssh_runner.go:195] Run: crio --version
	I0110 09:58:16.847420  490351 ssh_runner.go:195] Run: crio --version
	I0110 09:58:16.881854  490351 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 09:58:16.884738  490351 cli_runner.go:164] Run: docker network inspect force-systemd-flag-524845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:58:16.905702  490351 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 09:58:16.910503  490351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:58:16.924072  490351 kubeadm.go:884] updating cluster {Name:force-systemd-flag-524845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-524845 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 09:58:16.924190  490351 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:58:16.924255  490351 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:58:16.968317  490351 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 09:58:16.968344  490351 crio.go:433] Images already preloaded, skipping extraction
	I0110 09:58:16.968414  490351 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:58:16.994323  490351 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 09:58:16.994345  490351 cache_images.go:86] Images are preloaded, skipping loading
	I0110 09:58:16.994354  490351 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 09:58:16.994443  490351 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-524845 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-524845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
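	The rendered kubelet unit and drop-in above are copied to the node a few steps below; a small sketch of how they can be inspected in place once they land (paths as used in the scp steps that follow):
	  systemctl cat kubelet --no-pager
	  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf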
	I0110 09:58:16.994529  490351 ssh_runner.go:195] Run: crio config
	I0110 09:58:17.056576  490351 cni.go:84] Creating CNI manager for ""
	I0110 09:58:17.056647  490351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:58:17.056682  490351 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 09:58:17.056738  490351 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-524845 NodeName:force-systemd-flag-524845 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 09:58:17.056900  490351 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-524845"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 09:58:17.057007  490351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 09:58:17.064554  490351 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 09:58:17.064628  490351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 09:58:17.071927  490351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0110 09:58:17.084201  490351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 09:58:17.097109  490351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
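	Before init runs, the generated kubeadm config copied above can be sanity-checked on the node; a sketch, assuming a kubeadm recent enough to ship "kubeadm config validate":
	  sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new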
	I0110 09:58:17.110390  490351 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 09:58:17.114022  490351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:58:17.123933  490351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:58:17.230956  490351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:58:17.246838  490351 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845 for IP: 192.168.85.2
	I0110 09:58:17.246859  490351 certs.go:195] generating shared ca certs ...
	I0110 09:58:17.246875  490351 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.247059  490351 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 09:58:17.247123  490351 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 09:58:17.247139  490351 certs.go:257] generating profile certs ...
	I0110 09:58:17.247216  490351 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.key
	I0110 09:58:17.247252  490351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.crt with IP's: []
	I0110 09:58:17.425377  490351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.crt ...
	I0110 09:58:17.425412  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.crt: {Name:mk518a35dd190d1c13e274a186c46aac0b65c0e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.425662  490351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.key ...
	I0110 09:58:17.425681  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.key: {Name:mkb14258b269a57e40590de8cc644162f2c9e79e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.425801  490351 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key.d87016ff
	I0110 09:58:17.425823  490351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt.d87016ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 09:58:17.564816  490351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt.d87016ff ...
	I0110 09:58:17.564845  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt.d87016ff: {Name:mka8f91ace66a4d1d3ed424ff0e0eec71041a342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.565034  490351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key.d87016ff ...
	I0110 09:58:17.565047  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key.d87016ff: {Name:mk6b93fc5491976a4f4cf76c3f017ba1495719dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.565137  490351 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt.d87016ff -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt
	I0110 09:58:17.565217  490351 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key.d87016ff -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key
	I0110 09:58:17.565282  490351 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.key
	I0110 09:58:17.565301  490351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.crt with IP's: []
	I0110 09:58:17.900229  490351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.crt ...
	I0110 09:58:17.900259  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.crt: {Name:mk45acd75c334a72bca2f45577d944d855bffc29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.900443  490351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.key ...
	I0110 09:58:17.900460  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.key: {Name:mk53245ff56aaacc920494468e348c6e8626f813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.900580  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 09:58:17.900604  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 09:58:17.900616  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 09:58:17.900633  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 09:58:17.900645  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 09:58:17.900661  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 09:58:17.900676  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 09:58:17.900692  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 09:58:17.900740  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 09:58:17.900784  490351 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 09:58:17.900793  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 09:58:17.900819  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 09:58:17.900848  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 09:58:17.900877  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 09:58:17.900929  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 09:58:17.900964  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem -> /usr/share/ca-certificates/309898.pem
	I0110 09:58:17.900980  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> /usr/share/ca-certificates/3098982.pem
	I0110 09:58:17.900997  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:58:17.901579  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 09:58:17.919395  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 09:58:17.937982  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 09:58:17.956115  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 09:58:17.973123  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 09:58:17.990988  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 09:58:18.009880  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 09:58:18.029805  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 09:58:18.048374  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 09:58:18.067165  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 09:58:18.084766  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 09:58:18.103156  490351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 09:58:18.116152  490351 ssh_runner.go:195] Run: openssl version
	I0110 09:58:18.122650  490351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 09:58:18.130503  490351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 09:58:18.137994  490351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 09:58:18.141787  490351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 09:58:18.141854  490351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 09:58:18.183241  490351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 09:58:18.191230  490351 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
	I0110 09:58:18.199053  490351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 09:58:18.206896  490351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 09:58:18.214984  490351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 09:58:18.219584  490351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 09:58:18.219728  490351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 09:58:18.265971  490351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 09:58:18.273397  490351 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
	I0110 09:58:18.280861  490351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:58:18.288075  490351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 09:58:18.295540  490351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:58:18.299175  490351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:58:18.299242  490351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:58:18.340759  490351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 09:58:18.348469  490351 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
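	The hash-named links created above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes printed by the "openssl x509 -hash" runs; once the links are in place, a rough verification against the rehashed directory looks like:
	  openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem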
	I0110 09:58:18.356224  490351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 09:58:18.359904  490351 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 09:58:18.359962  490351 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-524845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-524845 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:58:18.360049  490351 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:58:18.360114  490351 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:58:18.387471  490351 cri.go:96] found id: ""
	I0110 09:58:18.387553  490351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 09:58:18.395437  490351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 09:58:18.404156  490351 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:58:18.404231  490351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:58:18.416048  490351 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:58:18.416070  490351 kubeadm.go:158] found existing configuration files:
	
	I0110 09:58:18.416123  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:58:18.425704  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:58:18.425773  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:58:18.433986  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:58:18.442886  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:58:18.442959  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:58:18.450880  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:58:18.459668  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:58:18.459733  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:58:18.468048  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:58:18.476924  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:58:18.477005  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:58:18.484485  490351 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:58:18.607969  490351 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:58:18.608397  490351 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:58:18.674428  490351 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 10:02:21.842360  490351 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 10:02:21.842394  490351 kubeadm.go:319] 
	I0110 10:02:21.842516  490351 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 10:02:21.848886  490351 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:02:21.849075  490351 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:02:21.849219  490351 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:02:21.852606  490351 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:02:21.852695  490351 kubeadm.go:319] OS: Linux
	I0110 10:02:21.852754  490351 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:02:21.852807  490351 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:02:21.852857  490351 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:02:21.852908  490351 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:02:21.852959  490351 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:02:21.853011  490351 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:02:21.853060  490351 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:02:21.853110  490351 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:02:21.853159  490351 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:02:21.853236  490351 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:02:21.853338  490351 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:02:21.853434  490351 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:02:21.853501  490351 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:02:21.856776  490351 out.go:252]   - Generating certificates and keys ...
	I0110 10:02:21.856870  490351 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:02:21.856939  490351 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:02:21.857011  490351 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:02:21.857072  490351 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:02:21.857137  490351 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:02:21.857190  490351 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:02:21.857247  490351 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:02:21.857386  490351 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:02:21.857442  490351 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:02:21.857577  490351 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:02:21.857647  490351 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:02:21.857714  490351 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:02:21.857762  490351 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:02:21.857821  490351 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:02:21.857876  490351 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:02:21.857941  490351 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:02:21.857999  490351 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:02:21.858066  490351 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:02:21.858125  490351 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:02:21.858212  490351 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:02:21.858282  490351 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:02:21.861182  490351 out.go:252]   - Booting up control plane ...
	I0110 10:02:21.861340  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:02:21.861473  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:02:21.861592  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:02:21.861750  490351 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:02:21.861895  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:02:21.862059  490351 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:02:21.862188  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:02:21.862264  490351 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:02:21.862452  490351 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:02:21.862605  490351 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:02:21.862748  490351 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001342949s
	I0110 10:02:21.862806  490351 kubeadm.go:319] 
	I0110 10:02:21.862878  490351 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 10:02:21.862914  490351 kubeadm.go:319] 	- The kubelet is not running
	I0110 10:02:21.863025  490351 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 10:02:21.863030  490351 kubeadm.go:319] 
	I0110 10:02:21.863142  490351 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 10:02:21.863176  490351 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 10:02:21.863208  490351 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 10:02:21.863213  490351 kubeadm.go:319] 
	W0110 10:02:21.863327  490351 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001342949s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
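	When the wait-control-plane phase times out like this, the probe kubeadm is waiting on and the usual first checks on the node are roughly the following (illustrative only; the cgroup-driver lines matter here because systemd was forced for both CRI-O and the kubelet):
	  curl -sS http://127.0.0.1:10248/healthz; echo
	  systemctl status kubelet --no-pager
	  journalctl -xeu kubelet --no-pager | tail -n 50
	  grep cgroupDriver /var/lib/kubelet/config.yaml
	  sudo crio config | grep cgroup_manager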
	
	I0110 10:02:21.863399  490351 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0110 10:02:22.326865  490351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:02:22.351207  490351 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:02:22.351267  490351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:02:22.364009  490351 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:02:22.364028  490351 kubeadm.go:158] found existing configuration files:
	
	I0110 10:02:22.364080  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 10:02:22.377483  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:02:22.377599  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:02:22.387938  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 10:02:22.399386  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:02:22.399500  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:02:22.407042  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 10:02:22.421172  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:02:22.421285  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:02:22.432909  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 10:02:22.444516  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:02:22.444637  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 10:02:22.457744  490351 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:02:22.529233  490351 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:02:22.529354  490351 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:02:22.663263  490351 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:02:22.663411  490351 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:02:22.663492  490351 kubeadm.go:319] OS: Linux
	I0110 10:02:22.663563  490351 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:02:22.663670  490351 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:02:22.663761  490351 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:02:22.663841  490351 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:02:22.663923  490351 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:02:22.663995  490351 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:02:22.664045  490351 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:02:22.664096  490351 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:02:22.664152  490351 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:02:22.789192  490351 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:02:22.789359  490351 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:02:22.789481  490351 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:02:22.807386  490351 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:02:22.812689  490351 out.go:252]   - Generating certificates and keys ...
	I0110 10:02:22.812850  490351 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:02:22.812970  490351 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:02:22.813528  490351 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 10:02:22.814201  490351 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 10:02:22.814820  490351 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 10:02:22.821187  490351 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 10:02:22.822578  490351 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 10:02:22.832829  490351 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 10:02:22.832919  490351 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 10:02:22.832992  490351 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 10:02:22.833030  490351 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 10:02:22.833085  490351 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:02:23.192856  490351 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:02:23.508049  490351 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:02:23.719185  490351 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:02:24.061850  490351 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:02:24.248896  490351 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:02:24.248995  490351 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:02:24.249547  490351 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:02:24.252594  490351 out.go:252]   - Booting up control plane ...
	I0110 10:02:24.252697  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:02:24.252776  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:02:24.253741  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:02:24.281294  490351 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:02:24.281402  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:02:24.289897  490351 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:02:24.289997  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:02:24.290037  490351 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:02:24.511360  490351 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:02:24.511481  490351 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:06:24.511159  490351 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000197132s
	I0110 10:06:24.511511  490351 kubeadm.go:319] 
	I0110 10:06:24.511636  490351 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 10:06:24.511700  490351 kubeadm.go:319] 	- The kubelet is not running
	I0110 10:06:24.512040  490351 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 10:06:24.512061  490351 kubeadm.go:319] 
	I0110 10:06:24.512559  490351 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 10:06:24.512619  490351 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 10:06:24.512672  490351 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 10:06:24.512682  490351 kubeadm.go:319] 
	I0110 10:06:24.521231  490351 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 10:06:24.521661  490351 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 10:06:24.521769  490351 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 10:06:24.522007  490351 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 10:06:24.522013  490351 kubeadm.go:319] 
	I0110 10:06:24.522081  490351 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 10:06:24.522133  490351 kubeadm.go:403] duration metric: took 8m6.162176446s to StartCluster
	I0110 10:06:24.522165  490351 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 10:06:24.522225  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 10:06:24.563945  490351 cri.go:96] found id: ""
	I0110 10:06:24.563983  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.563992  490351 logs.go:284] No container was found matching "kube-apiserver"
	I0110 10:06:24.563998  490351 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 10:06:24.564069  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 10:06:24.593424  490351 cri.go:96] found id: ""
	I0110 10:06:24.593446  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.593455  490351 logs.go:284] No container was found matching "etcd"
	I0110 10:06:24.593461  490351 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 10:06:24.593518  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 10:06:24.624042  490351 cri.go:96] found id: ""
	I0110 10:06:24.624064  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.624073  490351 logs.go:284] No container was found matching "coredns"
	I0110 10:06:24.624078  490351 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 10:06:24.624204  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 10:06:24.663067  490351 cri.go:96] found id: ""
	I0110 10:06:24.663091  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.663100  490351 logs.go:284] No container was found matching "kube-scheduler"
	I0110 10:06:24.663114  490351 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 10:06:24.663175  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 10:06:24.696647  490351 cri.go:96] found id: ""
	I0110 10:06:24.696712  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.696737  490351 logs.go:284] No container was found matching "kube-proxy"
	I0110 10:06:24.696760  490351 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 10:06:24.696848  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 10:06:24.732584  490351 cri.go:96] found id: ""
	I0110 10:06:24.732607  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.732615  490351 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 10:06:24.732622  490351 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 10:06:24.732682  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 10:06:24.761352  490351 cri.go:96] found id: ""
	I0110 10:06:24.761374  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.761383  490351 logs.go:284] No container was found matching "kindnet"
	I0110 10:06:24.761392  490351 logs.go:123] Gathering logs for kubelet ...
	I0110 10:06:24.761404  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 10:06:24.835028  490351 logs.go:123] Gathering logs for dmesg ...
	I0110 10:06:24.835065  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 10:06:24.858035  490351 logs.go:123] Gathering logs for describe nodes ...
	I0110 10:06:24.858067  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 10:06:24.975206  490351 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 10:06:24.966612    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.967380    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.969060    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.969578    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.971230    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 10:06:24.966612    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.967380    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.969060    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.969578    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.971230    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 10:06:24.975227  490351 logs.go:123] Gathering logs for CRI-O ...
	I0110 10:06:24.975239  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0110 10:06:25.009714  490351 logs.go:123] Gathering logs for container status ...
	I0110 10:06:25.009760  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0110 10:06:25.051839  490351 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000197132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 10:06:25.051895  490351 out.go:285] * 
	* 
	W0110 10:06:25.051957  490351 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000197132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000197132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 10:06:25.051976  490351 out.go:285] * 
	* 
	W0110 10:06:25.052224  490351 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 10:06:25.057655  490351 out.go:203] 
	W0110 10:06:25.061372  490351 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000197132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000197132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 10:06:25.061515  490351 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 10:06:25.061577  490351 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 10:06:25.064625  490351 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
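The start fails in kubeadm's wait-control-plane phase: the kubelet never reports healthy on http://127.0.0.1:10248/healthz within the 4m0s window, so no control-plane containers are ever created (every crictl listing above comes back empty). The commands below are a minimal sketch of the troubleshooting steps the log itself suggests, run against this profile from the test workspace; the final retry simply adds the --extra-config=kubelet.cgroup-driver=systemd flag that minikube recommends in its suggestion line.

# Check the kubelet inside the node (the commands quoted in the kubeadm error above).
out/minikube-linux-arm64 -p force-systemd-flag-524845 ssh "sudo systemctl status kubelet"
out/minikube-linux-arm64 -p force-systemd-flag-524845 ssh "sudo journalctl -xeu kubelet -n 100"
# Probe the same health endpoint kubeadm was polling.
out/minikube-linux-arm64 -p force-systemd-flag-524845 ssh "curl -sS http://127.0.0.1:10248/healthz"
# Retry the start with the cgroup driver override suggested in the log.
out/minikube-linux-arm64 start -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd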
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-524845 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-10 10:06:25.626843398 +0000 UTC m=+3206.853177629
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-524845
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-524845:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4936e905b38e2df01650ea44b9b167308b1b1d2e24aa448ce36cfb82918fabfc",
	        "Created": "2026-01-10T09:58:12.488398519Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 490771,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T09:58:12.559386805Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/4936e905b38e2df01650ea44b9b167308b1b1d2e24aa448ce36cfb82918fabfc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4936e905b38e2df01650ea44b9b167308b1b1d2e24aa448ce36cfb82918fabfc/hostname",
	        "HostsPath": "/var/lib/docker/containers/4936e905b38e2df01650ea44b9b167308b1b1d2e24aa448ce36cfb82918fabfc/hosts",
	        "LogPath": "/var/lib/docker/containers/4936e905b38e2df01650ea44b9b167308b1b1d2e24aa448ce36cfb82918fabfc/4936e905b38e2df01650ea44b9b167308b1b1d2e24aa448ce36cfb82918fabfc-json.log",
	        "Name": "/force-systemd-flag-524845",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-524845:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-524845",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4936e905b38e2df01650ea44b9b167308b1b1d2e24aa448ce36cfb82918fabfc",
	                "LowerDir": "/var/lib/docker/overlay2/b26b43e3c1c9705e2f4871daeb7c8c830daa8d23642e4cda1657e830c391761f-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b26b43e3c1c9705e2f4871daeb7c8c830daa8d23642e4cda1657e830c391761f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b26b43e3c1c9705e2f4871daeb7c8c830daa8d23642e4cda1657e830c391761f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b26b43e3c1c9705e2f4871daeb7c8c830daa8d23642e4cda1657e830c391761f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-524845",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-524845/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-524845",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-524845",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-524845",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c96e884d68973af1c95ab020e07215e6eb6c056bd5918ee52dcc6d27a6d87b2",
	            "SandboxKey": "/var/run/docker/netns/0c96e884d689",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-524845": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:5c:ff:c5:17:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "012be6601f8a1fd1765f1a2777ee0b1b76b584053fb0349e41486cab7e933573",
	                    "EndpointID": "8c203d722fd8dcef33d2e3839ad043519ef55078bf782c94fe8ef230315625aa",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-524845",
	                        "4936e905b38e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
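The inspect output above shows the node container still running, with its guest ports published only on 127.0.0.1 (SSH mapped to host port 33409) and the node attached to the force-systemd-flag-524845 network at 192.168.85.2. A minimal sketch, assuming the same local docker daemon, of reading those two values back with Go templates instead of scanning the full JSON:

# Host port mapped to the node's SSH port (22/tcp), from the Ports map above.
docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-524845
# Node IP on the profile's network, from the Networks map above.
docker inspect -f '{{(index .NetworkSettings.Networks "force-systemd-flag-524845").IPAddress}}' force-systemd-flag-524845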
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-524845 -n force-systemd-flag-524845
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-524845 -n force-systemd-flag-524845: exit status 6 (537.860154ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 10:06:26.174423  516949 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-524845" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig

                                                
                                                
** /stderr **
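Exit status 6 here reflects the kubeconfig error in stderr: because the apiserver never came up, the profile has no endpoint recorded in the kubeconfig, so kubectl is still pointing at a stale context. A minimal sketch of the follow-up the status output itself recommends, applicable only once a start actually succeeds:

# Re-point kubectl at this profile's endpoint, then confirm the active context.
out/minikube-linux-arm64 -p force-systemd-flag-524845 update-context
kubectl config current-context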
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-524845 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p force-systemd-flag-524845 logs -n 25: (1.208060331s)
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-env-646877                                                                                                                                                                                                                   │ force-systemd-env-646877  │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p cert-options-525619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ cert-options-525619 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ -p cert-options-525619 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ delete  │ -p cert-options-525619                                                                                                                                                                                                                        │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │                     │
	│ stop    │ -p old-k8s-version-729486 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │ 10 Jan 26 10:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-729486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:02 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:03 UTC │
	│ image   │ old-k8s-version-729486 image list --format=json                                                                                                                                                                                               │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ pause   │ -p old-k8s-version-729486 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │                     │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │                     │
	│ stop    │ -p no-preload-964204 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable dashboard -p no-preload-964204 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:05 UTC │
	│ image   │ no-preload-964204 image list --format=json                                                                                                                                                                                                    │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ pause   │ -p no-preload-964204 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │                     │
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-219333        │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │                     │
	│ ssh     │ force-systemd-flag-524845 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-524845 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:05:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:05:56.308911  514451 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:05:56.309087  514451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:05:56.309116  514451 out.go:374] Setting ErrFile to fd 2...
	I0110 10:05:56.309136  514451 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:05:56.314136  514451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:05:56.315105  514451 out.go:368] Setting JSON to false
	I0110 10:05:56.315970  514451 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10106,"bootTime":1768029451,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:05:56.316084  514451 start.go:143] virtualization:  
	I0110 10:05:56.320055  514451 out.go:179] * [embed-certs-219333] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:05:56.324279  514451 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:05:56.324429  514451 notify.go:221] Checking for updates...
	I0110 10:05:56.330750  514451 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:05:56.333861  514451 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:05:56.337008  514451 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:05:56.340099  514451 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:05:56.343070  514451 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:05:56.346667  514451 config.go:182] Loaded profile config "force-systemd-flag-524845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:05:56.346820  514451 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:05:56.370105  514451 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:05:56.370213  514451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:05:56.433938  514451 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:05:56.424647953 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:05:56.434051  514451 docker.go:319] overlay module found
	I0110 10:05:56.437369  514451 out.go:179] * Using the docker driver based on user configuration
	I0110 10:05:56.440370  514451 start.go:309] selected driver: docker
	I0110 10:05:56.440403  514451 start.go:928] validating driver "docker" against <nil>
	I0110 10:05:56.440424  514451 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:05:56.441211  514451 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:05:56.495142  514451 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:05:56.485859991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:05:56.495316  514451 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 10:05:56.495543  514451 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:05:56.498567  514451 out.go:179] * Using Docker driver with root privileges
	I0110 10:05:56.501467  514451 cni.go:84] Creating CNI manager for ""
	I0110 10:05:56.501542  514451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:05:56.501557  514451 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 10:05:56.501646  514451 start.go:353] cluster config:
	{Name:embed-certs-219333 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-219333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:05:56.504885  514451 out.go:179] * Starting "embed-certs-219333" primary control-plane node in "embed-certs-219333" cluster
	I0110 10:05:56.507851  514451 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:05:56.510910  514451 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:05:56.513766  514451 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:05:56.513818  514451 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:05:56.513828  514451 cache.go:65] Caching tarball of preloaded images
	I0110 10:05:56.513858  514451 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:05:56.513915  514451 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:05:56.513925  514451 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:05:56.514038  514451 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/config.json ...
	I0110 10:05:56.514054  514451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/config.json: {Name:mk6e5519b07937b4925f144b253441d9d119a64d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:05:56.533955  514451 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:05:56.533975  514451 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:05:56.533990  514451 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:05:56.534026  514451 start.go:360] acquireMachinesLock for embed-certs-219333: {Name:mk194110ed8c34314eec25e22167b583e391cf6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:05:56.534128  514451 start.go:364] duration metric: took 86.081µs to acquireMachinesLock for "embed-certs-219333"
	I0110 10:05:56.534154  514451 start.go:93] Provisioning new machine with config: &{Name:embed-certs-219333 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-219333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:05:56.534224  514451 start.go:125] createHost starting for "" (driver="docker")
	I0110 10:05:56.537795  514451 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 10:05:56.538037  514451 start.go:159] libmachine.API.Create for "embed-certs-219333" (driver="docker")
	I0110 10:05:56.538075  514451 client.go:173] LocalClient.Create starting
	I0110 10:05:56.538148  514451 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 10:05:56.538188  514451 main.go:144] libmachine: Decoding PEM data...
	I0110 10:05:56.538216  514451 main.go:144] libmachine: Parsing certificate...
	I0110 10:05:56.538278  514451 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 10:05:56.538303  514451 main.go:144] libmachine: Decoding PEM data...
	I0110 10:05:56.538318  514451 main.go:144] libmachine: Parsing certificate...
	I0110 10:05:56.538694  514451 cli_runner.go:164] Run: docker network inspect embed-certs-219333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 10:05:56.554821  514451 cli_runner.go:211] docker network inspect embed-certs-219333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 10:05:56.554911  514451 network_create.go:284] running [docker network inspect embed-certs-219333] to gather additional debugging logs...
	I0110 10:05:56.554935  514451 cli_runner.go:164] Run: docker network inspect embed-certs-219333
	W0110 10:05:56.570549  514451 cli_runner.go:211] docker network inspect embed-certs-219333 returned with exit code 1
	I0110 10:05:56.570584  514451 network_create.go:287] error running [docker network inspect embed-certs-219333]: docker network inspect embed-certs-219333: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-219333 not found
	I0110 10:05:56.570603  514451 network_create.go:289] output of [docker network inspect embed-certs-219333]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-219333 not found
	
	** /stderr **
	I0110 10:05:56.570726  514451 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:05:56.586763  514451 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 10:05:56.587156  514451 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 10:05:56.587401  514451 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 10:05:56.587827  514451 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f0410}
	I0110 10:05:56.587857  514451 network_create.go:124] attempt to create docker network embed-certs-219333 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 10:05:56.587914  514451 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-219333 embed-certs-219333
	I0110 10:05:56.645332  514451 network_create.go:108] docker network embed-certs-219333 192.168.76.0/24 created
	I0110 10:05:56.645366  514451 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-219333" container
	I0110 10:05:56.645438  514451 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 10:05:56.661163  514451 cli_runner.go:164] Run: docker volume create embed-certs-219333 --label name.minikube.sigs.k8s.io=embed-certs-219333 --label created_by.minikube.sigs.k8s.io=true
	I0110 10:05:56.682725  514451 oci.go:103] Successfully created a docker volume embed-certs-219333
	I0110 10:05:56.682821  514451 cli_runner.go:164] Run: docker run --rm --name embed-certs-219333-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-219333 --entrypoint /usr/bin/test -v embed-certs-219333:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 10:05:57.220416  514451 oci.go:107] Successfully prepared a docker volume embed-certs-219333
	I0110 10:05:57.220477  514451 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:05:57.220489  514451 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 10:05:57.220595  514451 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-219333:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 10:06:01.189117  514451 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-219333:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.96847955s)
	I0110 10:06:01.189155  514451 kic.go:203] duration metric: took 3.968661969s to extract preloaded images to volume ...
	W0110 10:06:01.189313  514451 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 10:06:01.189428  514451 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 10:06:01.245069  514451 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-219333 --name embed-certs-219333 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-219333 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-219333 --network embed-certs-219333 --ip 192.168.76.2 --volume embed-certs-219333:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 10:06:01.558234  514451 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Running}}
	I0110 10:06:01.581973  514451 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:06:01.605611  514451 cli_runner.go:164] Run: docker exec embed-certs-219333 stat /var/lib/dpkg/alternatives/iptables
	I0110 10:06:01.660426  514451 oci.go:144] the created container "embed-certs-219333" has a running status.
	I0110 10:06:01.660457  514451 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa...
	I0110 10:06:02.692985  514451 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 10:06:02.715196  514451 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:06:02.736889  514451 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 10:06:02.736908  514451 kic_runner.go:114] Args: [docker exec --privileged embed-certs-219333 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 10:06:02.785457  514451 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:06:02.804434  514451 machine.go:94] provisionDockerMachine start ...
	I0110 10:06:02.804699  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:02.825355  514451 main.go:144] libmachine: Using SSH client type: native
	I0110 10:06:02.825690  514451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I0110 10:06:02.825705  514451 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:06:02.983962  514451 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-219333
	
	I0110 10:06:02.983988  514451 ubuntu.go:182] provisioning hostname "embed-certs-219333"
	I0110 10:06:02.984091  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:03.001366  514451 main.go:144] libmachine: Using SSH client type: native
	I0110 10:06:03.001681  514451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I0110 10:06:03.001697  514451 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-219333 && echo "embed-certs-219333" | sudo tee /etc/hostname
	I0110 10:06:03.166258  514451 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-219333
	
	I0110 10:06:03.166426  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:03.183988  514451 main.go:144] libmachine: Using SSH client type: native
	I0110 10:06:03.184309  514451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I0110 10:06:03.184339  514451 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-219333' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-219333/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-219333' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:06:03.332663  514451 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:06:03.332702  514451 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:06:03.332722  514451 ubuntu.go:190] setting up certificates
	I0110 10:06:03.332732  514451 provision.go:84] configureAuth start
	I0110 10:06:03.332795  514451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-219333
	I0110 10:06:03.350335  514451 provision.go:143] copyHostCerts
	I0110 10:06:03.350405  514451 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:06:03.350418  514451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:06:03.350496  514451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:06:03.350599  514451 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:06:03.350609  514451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:06:03.350636  514451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:06:03.350705  514451 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:06:03.350714  514451 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:06:03.350739  514451 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:06:03.350799  514451 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.embed-certs-219333 san=[127.0.0.1 192.168.76.2 embed-certs-219333 localhost minikube]
	I0110 10:06:03.711169  514451 provision.go:177] copyRemoteCerts
	I0110 10:06:03.711243  514451 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:06:03.711296  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:03.728682  514451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:06:03.833604  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:06:03.850951  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 10:06:03.869155  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:06:03.886634  514451 provision.go:87] duration metric: took 553.876683ms to configureAuth
	I0110 10:06:03.886664  514451 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:06:03.886850  514451 config.go:182] Loaded profile config "embed-certs-219333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:06:03.886969  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:03.904909  514451 main.go:144] libmachine: Using SSH client type: native
	I0110 10:06:03.905217  514451 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33439 <nil> <nil>}
	I0110 10:06:03.905237  514451 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:06:04.201615  514451 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:06:04.201635  514451 machine.go:97] duration metric: took 1.397180482s to provisionDockerMachine
	I0110 10:06:04.201645  514451 client.go:176] duration metric: took 7.663560031s to LocalClient.Create
	I0110 10:06:04.201659  514451 start.go:167] duration metric: took 7.663623097s to libmachine.API.Create "embed-certs-219333"
	I0110 10:06:04.201667  514451 start.go:293] postStartSetup for "embed-certs-219333" (driver="docker")
	I0110 10:06:04.201677  514451 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:06:04.201751  514451 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:06:04.201795  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:04.218656  514451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:06:04.320861  514451 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:06:04.324169  514451 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:06:04.324202  514451 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:06:04.324215  514451 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:06:04.324270  514451 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:06:04.324373  514451 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:06:04.324481  514451 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:06:04.332304  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:06:04.350561  514451 start.go:296] duration metric: took 148.879354ms for postStartSetup
	I0110 10:06:04.350938  514451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-219333
	I0110 10:06:04.367783  514451 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/config.json ...
	I0110 10:06:04.368058  514451 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:06:04.368115  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:04.384909  514451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:06:04.493692  514451 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:06:04.498677  514451 start.go:128] duration metric: took 7.964440274s to createHost
	I0110 10:06:04.498703  514451 start.go:83] releasing machines lock for "embed-certs-219333", held for 7.964565478s
	I0110 10:06:04.498784  514451 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-219333
	I0110 10:06:04.515652  514451 ssh_runner.go:195] Run: cat /version.json
	I0110 10:06:04.515718  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:04.515969  514451 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:06:04.516022  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:04.537407  514451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:06:04.556042  514451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:06:04.648423  514451 ssh_runner.go:195] Run: systemctl --version
	I0110 10:06:04.752004  514451 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:06:04.787498  514451 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:06:04.792411  514451 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:06:04.792485  514451 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:06:04.821683  514451 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 10:06:04.821708  514451 start.go:496] detecting cgroup driver to use...
	I0110 10:06:04.821749  514451 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:06:04.821803  514451 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:06:04.840856  514451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:06:04.854150  514451 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:06:04.854224  514451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:06:04.873286  514451 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:06:04.893131  514451 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:06:05.043941  514451 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:06:05.192114  514451 docker.go:234] disabling docker service ...
	I0110 10:06:05.192275  514451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:06:05.217554  514451 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:06:05.235392  514451 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:06:05.363641  514451 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:06:05.517158  514451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:06:05.533067  514451 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:06:05.549522  514451 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:06:05.549591  514451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:05.559551  514451 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:06:05.559640  514451 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:05.569516  514451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:05.578715  514451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:05.587615  514451 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:06:05.596052  514451 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:05.605977  514451 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:05.619982  514451 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:05.629234  514451 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:06:05.637162  514451 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:06:05.644700  514451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:06:05.774687  514451 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:06:05.936207  514451 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:06:05.936344  514451 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:06:05.940694  514451 start.go:574] Will wait 60s for crictl version
	I0110 10:06:05.940829  514451 ssh_runner.go:195] Run: which crictl
	I0110 10:06:05.944762  514451 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:06:05.972915  514451 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:06:05.973053  514451 ssh_runner.go:195] Run: crio --version
	I0110 10:06:06.003958  514451 ssh_runner.go:195] Run: crio --version
	I0110 10:06:06.041407  514451 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:06:06.044250  514451 cli_runner.go:164] Run: docker network inspect embed-certs-219333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:06:06.061645  514451 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:06:06.065564  514451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:06:06.075829  514451 kubeadm.go:884] updating cluster {Name:embed-certs-219333 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-219333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:06:06.075951  514451 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:06:06.076011  514451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:06:06.112757  514451 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:06:06.112787  514451 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:06:06.112846  514451 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:06:06.143123  514451 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:06:06.143147  514451 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:06:06.143155  514451 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 10:06:06.143273  514451 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-219333 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-219333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:06:06.143378  514451 ssh_runner.go:195] Run: crio config
	I0110 10:06:06.205565  514451 cni.go:84] Creating CNI manager for ""
	I0110 10:06:06.205588  514451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:06:06.205610  514451 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:06:06.205656  514451 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-219333 NodeName:embed-certs-219333 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:06:06.205827  514451 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-219333"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:06:06.205916  514451 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:06:06.213995  514451 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:06:06.214073  514451 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:06:06.222199  514451 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0110 10:06:06.235327  514451 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:06:06.249060  514451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I0110 10:06:06.262403  514451 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:06:06.266060  514451 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:06:06.275772  514451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:06:06.392681  514451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:06:06.410943  514451 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333 for IP: 192.168.76.2
	I0110 10:06:06.410968  514451 certs.go:195] generating shared ca certs ...
	I0110 10:06:06.410985  514451 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:06.411149  514451 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:06:06.411214  514451 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:06:06.411227  514451 certs.go:257] generating profile certs ...
	I0110 10:06:06.411282  514451 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/client.key
	I0110 10:06:06.411306  514451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/client.crt with IP's: []
	I0110 10:06:06.553101  514451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/client.crt ...
	I0110 10:06:06.553133  514451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/client.crt: {Name:mkcb216f7db669279d533220d13e3f66614c48f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:06.553398  514451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/client.key ...
	I0110 10:06:06.553413  514451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/client.key: {Name:mkf55122a00d95b7cbd5a1ff620e7a9e711531f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:06.553532  514451 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.key.a4f0d3e0
	I0110 10:06:06.553551  514451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.crt.a4f0d3e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 10:06:06.910133  514451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.crt.a4f0d3e0 ...
	I0110 10:06:06.910169  514451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.crt.a4f0d3e0: {Name:mk0033c96fb97a31823756616bd4fa871b2d1aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:06.910360  514451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.key.a4f0d3e0 ...
	I0110 10:06:06.910375  514451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.key.a4f0d3e0: {Name:mk1c57b3ba3013ec0d56e1b159b9b6333fee64df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:06.910467  514451 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.crt.a4f0d3e0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.crt
	I0110 10:06:06.910545  514451 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.key.a4f0d3e0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.key
	I0110 10:06:06.910607  514451 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.key
	I0110 10:06:06.910626  514451 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.crt with IP's: []
	I0110 10:06:07.050957  514451 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.crt ...
	I0110 10:06:07.050989  514451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.crt: {Name:mk77b5fc9f5d09fd71fc147a596dc897963232a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:07.051214  514451 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.key ...
	I0110 10:06:07.051229  514451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.key: {Name:mk1c0af205e72c13e37f15f12880ed5172301bdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:07.051459  514451 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:06:07.051511  514451 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:06:07.051525  514451 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:06:07.051556  514451 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:06:07.051588  514451 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:06:07.051629  514451 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:06:07.051692  514451 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:06:07.052339  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:06:07.072466  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:06:07.090672  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:06:07.108248  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:06:07.126430  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0110 10:06:07.144107  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 10:06:07.161896  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:06:07.180778  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:06:07.205173  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:06:07.223417  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:06:07.241568  514451 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:06:07.259824  514451 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
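The certs.go/crypto.go steps above generate a client certificate, an apiserver serving certificate with the listed IP SANs, and an aggregator proxy-client certificate, all signed by the shared minikubeCA, and then copy them into /var/lib/minikube/certs. A compressed sketch of the same idea with Go's crypto/x509 — key sizes, validity periods and the system:masters organization are illustrative assumptions, and error handling is trimmed to keep the sketch short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Stand-in for the shared minikubeCA (in the log it already exists and is reused).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Client certificate comparable to the "minikube-user" profile cert.
	cliKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	cliTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	cliDER, _ := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)

	// PEM output corresponds to the client.crt file written above.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: cliDER})
}

An apiserver serving certificate would be built the same way, with the IPs from the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2) placed in the template's IPAddresses field.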
	I0110 10:06:07.273249  514451 ssh_runner.go:195] Run: openssl version
	I0110 10:06:07.279359  514451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:06:07.287127  514451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:06:07.294887  514451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:06:07.298698  514451 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:06:07.298764  514451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:06:07.340473  514451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:06:07.347823  514451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 10:06:07.354871  514451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:06:07.362224  514451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:06:07.369998  514451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:06:07.373839  514451 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:06:07.373908  514451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:06:07.416346  514451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:06:07.424357  514451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
	I0110 10:06:07.432178  514451 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:06:07.439933  514451 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:06:07.447995  514451 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:06:07.452403  514451 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:06:07.452556  514451 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:06:07.494617  514451 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:06:07.502267  514451 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
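The ls/openssl/ln sequences above install each PEM under /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients look up trusted CAs. A small sketch that shells out to openssl the same way and creates the hash symlink — it assumes openssl is on PATH and that the process is allowed to write to /etc/ssl/certs:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

	// openssl prints the subject hash (e.g. b5213941) used for lookup in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	target := filepath.Join("/etc/ssl/certs", filepath.Base(pem))

	// Equivalent of `ln -fs`: remove any stale link, then create the new one.
	_ = os.Remove(link)
	if err := os.Symlink(target, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println(link, "->", target)
}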
	I0110 10:06:07.510164  514451 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:06:07.514323  514451 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 10:06:07.514407  514451 kubeadm.go:401] StartCluster: {Name:embed-certs-219333 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-219333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:06:07.514504  514451 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:06:07.514585  514451 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:06:07.544963  514451 cri.go:96] found id: ""
	I0110 10:06:07.545064  514451 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:06:07.552897  514451 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 10:06:07.561078  514451 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:06:07.561179  514451 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:06:07.569113  514451 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:06:07.569180  514451 kubeadm.go:158] found existing configuration files:
	
	I0110 10:06:07.569238  514451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 10:06:07.577039  514451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:06:07.577106  514451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:06:07.584600  514451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 10:06:07.594140  514451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:06:07.594266  514451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:06:07.603456  514451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 10:06:07.611631  514451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:06:07.611696  514451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:06:07.619519  514451 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 10:06:07.627722  514451 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:06:07.627826  514451 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 10:06:07.635235  514451 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:06:07.752087  514451 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 10:06:07.752607  514451 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 10:06:07.824893  514451 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 10:06:20.850655  514451 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:06:20.850714  514451 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:06:20.850813  514451 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:06:20.850875  514451 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:06:20.850918  514451 kubeadm.go:319] OS: Linux
	I0110 10:06:20.850986  514451 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:06:20.851060  514451 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:06:20.851109  514451 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:06:20.851158  514451 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:06:20.851216  514451 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:06:20.851265  514451 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:06:20.851314  514451 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:06:20.851375  514451 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:06:20.851444  514451 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:06:20.851522  514451 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:06:20.851621  514451 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:06:20.851733  514451 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:06:20.851814  514451 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:06:20.854700  514451 out.go:252]   - Generating certificates and keys ...
	I0110 10:06:20.854803  514451 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:06:20.854869  514451 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:06:20.854946  514451 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:06:20.855005  514451 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:06:20.855088  514451 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:06:20.855166  514451 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:06:20.855226  514451 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:06:20.855352  514451 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-219333 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 10:06:20.855412  514451 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:06:20.855535  514451 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-219333 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 10:06:20.855605  514451 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:06:20.855672  514451 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:06:20.855719  514451 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:06:20.855777  514451 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:06:20.855831  514451 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:06:20.855891  514451 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:06:20.855952  514451 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:06:20.856019  514451 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:06:20.856078  514451 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:06:20.856162  514451 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:06:20.856230  514451 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:06:20.859301  514451 out.go:252]   - Booting up control plane ...
	I0110 10:06:20.859414  514451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:06:20.859497  514451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:06:20.859568  514451 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:06:20.859675  514451 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:06:20.859770  514451 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:06:20.859877  514451 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:06:20.859963  514451 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:06:20.860005  514451 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:06:20.860139  514451 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:06:20.860247  514451 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:06:20.860309  514451 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000834919s
	I0110 10:06:20.860410  514451 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 10:06:20.860519  514451 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0110 10:06:20.860612  514451 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 10:06:20.860695  514451 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 10:06:20.860776  514451 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.012769118s
	I0110 10:06:20.860845  514451 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.781782425s
	I0110 10:06:20.860916  514451 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501732247s
	I0110 10:06:20.861041  514451 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 10:06:20.861170  514451 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 10:06:20.861240  514451 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 10:06:20.861430  514451 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-219333 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 10:06:20.861489  514451 kubeadm.go:319] [bootstrap-token] Using token: 2m48cc.9q95dfcnj85roeu5
	I0110 10:06:20.866517  514451 out.go:252]   - Configuring RBAC rules ...
	I0110 10:06:20.866664  514451 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 10:06:20.866789  514451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 10:06:20.866955  514451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 10:06:20.867089  514451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 10:06:20.867225  514451 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 10:06:20.867324  514451 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 10:06:20.867471  514451 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 10:06:20.867528  514451 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 10:06:20.867589  514451 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 10:06:20.867596  514451 kubeadm.go:319] 
	I0110 10:06:20.867664  514451 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 10:06:20.867674  514451 kubeadm.go:319] 
	I0110 10:06:20.867792  514451 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 10:06:20.867802  514451 kubeadm.go:319] 
	I0110 10:06:20.867832  514451 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 10:06:20.867917  514451 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 10:06:20.867992  514451 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 10:06:20.868002  514451 kubeadm.go:319] 
	I0110 10:06:20.868056  514451 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 10:06:20.868064  514451 kubeadm.go:319] 
	I0110 10:06:20.868111  514451 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 10:06:20.868117  514451 kubeadm.go:319] 
	I0110 10:06:20.868175  514451 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 10:06:20.868287  514451 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 10:06:20.868376  514451 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 10:06:20.868386  514451 kubeadm.go:319] 
	I0110 10:06:20.868685  514451 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 10:06:20.868790  514451 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 10:06:20.868804  514451 kubeadm.go:319] 
	I0110 10:06:20.868904  514451 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2m48cc.9q95dfcnj85roeu5 \
	I0110 10:06:20.869022  514451 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e \
	I0110 10:06:20.869051  514451 kubeadm.go:319] 	--control-plane 
	I0110 10:06:20.869055  514451 kubeadm.go:319] 
	I0110 10:06:20.869162  514451 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 10:06:20.869173  514451 kubeadm.go:319] 
	I0110 10:06:20.869260  514451 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2m48cc.9q95dfcnj85roeu5 \
	I0110 10:06:20.869378  514451 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e 
	I0110 10:06:20.869397  514451 cni.go:84] Creating CNI manager for ""
	I0110 10:06:20.869405  514451 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:06:20.872569  514451 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 10:06:20.875510  514451 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 10:06:20.879425  514451 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 10:06:20.879465  514451 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 10:06:20.894068  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 10:06:21.195630  514451 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 10:06:21.195736  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:06:21.195856  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-219333 minikube.k8s.io/updated_at=2026_01_10T10_06_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=embed-certs-219333 minikube.k8s.io/primary=true
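Alongside applying the CNI manifest and labelling the node, the runs above also read the apiserver's OOM score adjustment via `cat /proc/$(pgrep kube-apiserver)/oom_adj`; its value, -16, is reported later in this log as "apiserver oom_adj: -16". A purely illustrative Go sketch of that check, shelling out to pgrep just like the logged command:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the kube-apiserver PID the same way the logged command does.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		log.Fatal(err)
	}
	pid := strings.Fields(string(out))[0] // first match is enough for a sketch

	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	// A negative value makes the kernel less likely to OOM-kill the apiserver.
	fmt.Printf("kube-apiserver oom_adj: %s", adj)
}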
	I0110 10:06:24.511159  490351 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000197132s
	I0110 10:06:24.511511  490351 kubeadm.go:319] 
	I0110 10:06:24.511636  490351 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 10:06:24.511700  490351 kubeadm.go:319] 	- The kubelet is not running
	I0110 10:06:24.512040  490351 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 10:06:24.512061  490351 kubeadm.go:319] 
	I0110 10:06:24.512559  490351 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 10:06:24.512619  490351 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 10:06:24.512672  490351 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 10:06:24.512682  490351 kubeadm.go:319] 
	I0110 10:06:24.521231  490351 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 10:06:24.521661  490351 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 10:06:24.521769  490351 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 10:06:24.522007  490351 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 10:06:24.522013  490351 kubeadm.go:319] 
	I0110 10:06:24.522081  490351 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 10:06:24.522133  490351 kubeadm.go:403] duration metric: took 8m6.162176446s to StartCluster
	I0110 10:06:24.522165  490351 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 10:06:24.522225  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 10:06:24.563945  490351 cri.go:96] found id: ""
	I0110 10:06:24.563983  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.563992  490351 logs.go:284] No container was found matching "kube-apiserver"
	I0110 10:06:24.563998  490351 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 10:06:24.564069  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 10:06:24.593424  490351 cri.go:96] found id: ""
	I0110 10:06:24.593446  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.593455  490351 logs.go:284] No container was found matching "etcd"
	I0110 10:06:24.593461  490351 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 10:06:24.593518  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 10:06:24.624042  490351 cri.go:96] found id: ""
	I0110 10:06:24.624064  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.624073  490351 logs.go:284] No container was found matching "coredns"
	I0110 10:06:24.624078  490351 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 10:06:24.624204  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 10:06:24.663067  490351 cri.go:96] found id: ""
	I0110 10:06:24.663091  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.663100  490351 logs.go:284] No container was found matching "kube-scheduler"
	I0110 10:06:24.663114  490351 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 10:06:24.663175  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 10:06:24.696647  490351 cri.go:96] found id: ""
	I0110 10:06:24.696712  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.696737  490351 logs.go:284] No container was found matching "kube-proxy"
	I0110 10:06:24.696760  490351 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 10:06:24.696848  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 10:06:24.732584  490351 cri.go:96] found id: ""
	I0110 10:06:24.732607  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.732615  490351 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 10:06:24.732622  490351 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 10:06:24.732682  490351 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 10:06:24.761352  490351 cri.go:96] found id: ""
	I0110 10:06:24.761374  490351 logs.go:282] 0 containers: []
	W0110 10:06:24.761383  490351 logs.go:284] No container was found matching "kindnet"
	I0110 10:06:24.761392  490351 logs.go:123] Gathering logs for kubelet ...
	I0110 10:06:24.761404  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 10:06:24.835028  490351 logs.go:123] Gathering logs for dmesg ...
	I0110 10:06:24.835065  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 10:06:24.858035  490351 logs.go:123] Gathering logs for describe nodes ...
	I0110 10:06:24.858067  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 10:06:24.975206  490351 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 10:06:24.966612    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.967380    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.969060    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.969578    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.971230    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 10:06:24.966612    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.967380    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.969060    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.969578    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:24.971230    4869 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 10:06:24.975227  490351 logs.go:123] Gathering logs for CRI-O ...
	I0110 10:06:24.975239  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0110 10:06:25.009714  490351 logs.go:123] Gathering logs for container status ...
	I0110 10:06:25.009760  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0110 10:06:25.051839  490351 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000197132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 10:06:25.051895  490351 out.go:285] * 
	W0110 10:06:25.051957  490351 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000197132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 10:06:25.051976  490351 out.go:285] * 
	W0110 10:06:25.052224  490351 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 10:06:25.057655  490351 out.go:203] 
	W0110 10:06:25.061372  490351 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000197132s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 10:06:25.061515  490351 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 10:06:25.061577  490351 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 10:06:25.064625  490351 out.go:203] 
	I0110 10:06:21.343539  514451 ops.go:34] apiserver oom_adj: -16
	I0110 10:06:21.343672  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:06:21.843841  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:06:22.344767  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:06:22.844031  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:06:23.343806  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:06:23.844425  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:06:24.344423  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:06:24.844160  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:06:25.344245  514451 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:06:25.561458  514451 kubeadm.go:1114] duration metric: took 4.365776845s to wait for elevateKubeSystemPrivileges
	I0110 10:06:25.561483  514451 kubeadm.go:403] duration metric: took 18.047080422s to StartCluster
	I0110 10:06:25.561500  514451 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:25.561563  514451 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:06:25.562487  514451 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:25.562687  514451 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:06:25.562832  514451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 10:06:25.562994  514451 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:06:25.563074  514451 config.go:182] Loaded profile config "embed-certs-219333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:06:25.563075  514451 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-219333"
	I0110 10:06:25.563090  514451 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-219333"
	I0110 10:06:25.563111  514451 addons.go:70] Setting default-storageclass=true in profile "embed-certs-219333"
	I0110 10:06:25.563115  514451 host.go:66] Checking if "embed-certs-219333" exists ...
	I0110 10:06:25.563122  514451 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-219333"
	I0110 10:06:25.563414  514451 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:06:25.563582  514451 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:06:25.567446  514451 out.go:179] * Verifying Kubernetes components...
	I0110 10:06:25.570560  514451 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:06:25.635851  514451 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:06:25.641851  514451 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:06:25.641875  514451 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:06:25.641955  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:25.642327  514451 addons.go:239] Setting addon default-storageclass=true in "embed-certs-219333"
	I0110 10:06:25.642361  514451 host.go:66] Checking if "embed-certs-219333" exists ...
	I0110 10:06:25.642788  514451 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:06:25.712661  514451 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:06:25.712682  514451 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:06:25.712745  514451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:06:25.760455  514451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:06:25.761060  514451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33439 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:06:26.110471  514451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:06:26.134142  514451 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:06:26.134445  514451 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 10:06:26.269473  514451 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	
	
	==> CRI-O <==
	Jan 10 09:58:16 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:16.77752734Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 10 09:58:16 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:16.777694587Z" level=info msg="Starting seccomp notifier watcher"
	Jan 10 09:58:16 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:16.777815598Z" level=info msg="Create NRI interface"
	Jan 10 09:58:16 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:16.777995522Z" level=info msg="built-in NRI default validator is disabled"
	Jan 10 09:58:16 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:16.778016659Z" level=info msg="runtime interface created"
	Jan 10 09:58:16 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:16.778029878Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 10 09:58:16 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:16.778036705Z" level=info msg="runtime interface starting up..."
	Jan 10 09:58:16 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:16.778043646Z" level=info msg="starting plugins..."
	Jan 10 09:58:16 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:16.778059047Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 10 09:58:16 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:16.77813968Z" level=info msg="No systemd watchdog enabled"
	Jan 10 09:58:16 force-systemd-flag-524845 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Jan 10 09:58:18 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:18.677754735Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=f060b55a-bc37-4eb0-a352-f11d5f4296e1 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:58:18 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:18.678969947Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=ee9ef3f7-0280-4c60-b51b-349dcdc19558 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:58:18 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:18.679412332Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=9d1759d8-7c58-4643-bdba-137e6c96095b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:58:18 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:18.679886217Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=d31d7566-cccc-464d-8bfc-ef1ced6b1460 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:58:18 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:18.680405814Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1b922066-b17c-4ce5-afae-4a91a326707e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:58:18 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:18.680967413Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d83cd5f7-2df3-4cf1-8bd2-63edc08cb07e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:58:18 force-systemd-flag-524845 crio[843]: time="2026-01-10T09:58:18.681422221Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=5e34e977-0447-4b2d-b986-0c5681eb4cba name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:02:22 force-systemd-flag-524845 crio[843]: time="2026-01-10T10:02:22.794923533Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=b5e14091-2d0f-4f1c-be8a-c64d88e05930 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:02:22 force-systemd-flag-524845 crio[843]: time="2026-01-10T10:02:22.795976519Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=710c8a83-ab37-4158-bd34-80a72e8c23b6 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:02:22 force-systemd-flag-524845 crio[843]: time="2026-01-10T10:02:22.79663695Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=a4e4b4ad-adfc-4595-920f-5cd395d88d08 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:02:22 force-systemd-flag-524845 crio[843]: time="2026-01-10T10:02:22.797138954Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=92173870-1e78-4198-9cca-4db181965fdf name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:02:22 force-systemd-flag-524845 crio[843]: time="2026-01-10T10:02:22.797641525Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=b7e10cca-c377-4443-b9bf-43f374e2a8e0 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:02:22 force-systemd-flag-524845 crio[843]: time="2026-01-10T10:02:22.805031889Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=35073ff5-a5b3-4cae-ae27-ee23b7a6862c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:02:22 force-systemd-flag-524845 crio[843]: time="2026-01-10T10:02:22.805720497Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=9d9fccd3-2c2b-4f7e-a1e1-7598db5c4488 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 10:06:27.271300    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:27.271822    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:27.274071    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:27.274418    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:06:27.288003    4999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan10 09:35] overlayfs: idmapped layers are currently not supported
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	[Jan10 10:06] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:06:27 up  2:48,  0 user,  load average: 1.67, 1.56, 1.87
	Linux force-systemd-flag-524845 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 10 10:06:24 force-systemd-flag-524845 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 10:06:25 force-systemd-flag-524845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 645.
	Jan 10 10:06:25 force-systemd-flag-524845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:06:25 force-systemd-flag-524845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:06:25 force-systemd-flag-524845 kubelet[4886]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:06:25 force-systemd-flag-524845 kubelet[4886]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:06:25 force-systemd-flag-524845 kubelet[4886]: E0110 10:06:25.529429    4886 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 10:06:25 force-systemd-flag-524845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 10:06:25 force-systemd-flag-524845 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 10:06:26 force-systemd-flag-524845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Jan 10 10:06:26 force-systemd-flag-524845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:06:26 force-systemd-flag-524845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:06:26 force-systemd-flag-524845 kubelet[4913]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:06:26 force-systemd-flag-524845 kubelet[4913]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:06:26 force-systemd-flag-524845 kubelet[4913]: E0110 10:06:26.254281    4913 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 10:06:26 force-systemd-flag-524845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 10:06:26 force-systemd-flag-524845 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 10:06:26 force-systemd-flag-524845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Jan 10 10:06:26 force-systemd-flag-524845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:06:26 force-systemd-flag-524845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:06:27 force-systemd-flag-524845 kubelet[4972]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:06:27 force-systemd-flag-524845 kubelet[4972]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:06:27 force-systemd-flag-524845 kubelet[4972]: E0110 10:06:27.027820    4972 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 10:06:27 force-systemd-flag-524845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 10:06:27 force-systemd-flag-524845 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-524845 -n force-systemd-flag-524845
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-524845 -n force-systemd-flag-524845: exit status 6 (497.140438ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0110 10:06:27.956256  517350 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-524845" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-524845" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-524845" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-524845
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-524845: (2.246375056s)
--- FAIL: TestForceSystemdFlag (502.72s)
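
The TestForceSystemdFlag failure above is the v1.35 kubelet refusing to start on a cgroup v1 host, exactly as the kubeadm warning and the repeated "failed to validate kubelet configuration" journal entries state. As a minimal sketch only, not something the recorded test run executed, the commands below show how one could confirm the node's cgroup version and what the 'FailCgroupV1' override named in the warning looks like as a KubeletConfiguration fragment (the /tmp path is hypothetical; /var/lib/kubelet/config.yaml is the file written during the 'kubelet-start' phase logged above):

	# Print the cgroup filesystem type on the node: "cgroup2fs" means cgroup v2,
	# "tmpfs" means cgroup v1, which the v1.35 kubelet rejects unless overridden.
	stat -fc %T /sys/fs/cgroup/

	# Illustrative KubeletConfiguration fragment carrying the override named in the
	# kubeadm warning; to take effect it would have to end up in the kubelet config
	# that minikube renders (/var/lib/kubelet/config.yaml).
	cat <<'EOF' > /tmp/kubelet-cgroupv1-override.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF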

x
+
TestForceSystemdEnv (506.75s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-646877 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-646877 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m23.257361668s)

-- stdout --
	* [force-systemd-env-646877] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-646877" primary control-plane node in "force-systemd-env-646877" cluster
	* Pulling base image v0.0.48-1767944074-22401 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	
	

-- /stdout --
** stderr ** 
	I0110 09:51:48.895168  471378 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:51:48.895422  471378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:51:48.895449  471378 out.go:374] Setting ErrFile to fd 2...
	I0110 09:51:48.895468  471378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:51:48.895789  471378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:51:48.896278  471378 out.go:368] Setting JSON to false
	I0110 09:51:48.897258  471378 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9258,"bootTime":1768029451,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:51:48.897366  471378 start.go:143] virtualization:  
	I0110 09:51:48.903584  471378 out.go:179] * [force-systemd-env-646877] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:51:48.907206  471378 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:51:48.907279  471378 notify.go:221] Checking for updates...
	I0110 09:51:48.914104  471378 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:51:48.917477  471378 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:51:48.921116  471378 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:51:48.924305  471378 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:51:48.927353  471378 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I0110 09:51:48.930799  471378 config.go:182] Loaded profile config "test-preload-425904": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:51:48.930900  471378 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:51:48.982954  471378 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:51:48.983059  471378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:51:49.119080  471378 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2026-01-10 09:51:49.102131489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:51:49.119180  471378 docker.go:319] overlay module found
	I0110 09:51:49.122502  471378 out.go:179] * Using the docker driver based on user configuration
	I0110 09:51:49.125550  471378 start.go:309] selected driver: docker
	I0110 09:51:49.125567  471378 start.go:928] validating driver "docker" against <nil>
	I0110 09:51:49.125588  471378 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:51:49.126256  471378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:51:49.217063  471378 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2026-01-10 09:51:49.204894697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:51:49.217227  471378 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 09:51:49.217436  471378 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 09:51:49.220435  471378 out.go:179] * Using Docker driver with root privileges
	I0110 09:51:49.223383  471378 cni.go:84] Creating CNI manager for ""
	I0110 09:51:49.223448  471378 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:51:49.223461  471378 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 09:51:49.223545  471378 start.go:353] cluster config:
	{Name:force-systemd-env-646877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-646877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:51:49.226730  471378 out.go:179] * Starting "force-systemd-env-646877" primary control-plane node in "force-systemd-env-646877" cluster
	I0110 09:51:49.229709  471378 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 09:51:49.232618  471378 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:51:49.235501  471378 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:51:49.235547  471378 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 09:51:49.235557  471378 cache.go:65] Caching tarball of preloaded images
	I0110 09:51:49.235656  471378 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 09:51:49.235667  471378 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 09:51:49.235767  471378 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/config.json ...
	I0110 09:51:49.235783  471378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/config.json: {Name:mk811c4c1cb9530a14deece735ea7f70fdd043fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:51:49.235949  471378 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 09:51:49.257261  471378 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 09:51:49.257285  471378 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 09:51:49.257299  471378 cache.go:243] Successfully downloaded all kic artifacts
	I0110 09:51:49.257332  471378 start.go:360] acquireMachinesLock for force-systemd-env-646877: {Name:mk1dbec6c141241146721f987a9cfd8b6c007473 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 09:51:49.257439  471378 start.go:364] duration metric: took 87.279µs to acquireMachinesLock for "force-systemd-env-646877"
	I0110 09:51:49.257470  471378 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-646877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-646877 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 09:51:49.257535  471378 start.go:125] createHost starting for "" (driver="docker")
	I0110 09:51:49.261693  471378 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 09:51:49.261916  471378 start.go:159] libmachine.API.Create for "force-systemd-env-646877" (driver="docker")
	I0110 09:51:49.261946  471378 client.go:173] LocalClient.Create starting
	I0110 09:51:49.262013  471378 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 09:51:49.262050  471378 main.go:144] libmachine: Decoding PEM data...
	I0110 09:51:49.262073  471378 main.go:144] libmachine: Parsing certificate...
	I0110 09:51:49.262125  471378 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 09:51:49.262151  471378 main.go:144] libmachine: Decoding PEM data...
	I0110 09:51:49.262166  471378 main.go:144] libmachine: Parsing certificate...
	I0110 09:51:49.262539  471378 cli_runner.go:164] Run: docker network inspect force-systemd-env-646877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 09:51:49.277174  471378 cli_runner.go:211] docker network inspect force-systemd-env-646877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 09:51:49.277255  471378 network_create.go:284] running [docker network inspect force-systemd-env-646877] to gather additional debugging logs...
	I0110 09:51:49.277272  471378 cli_runner.go:164] Run: docker network inspect force-systemd-env-646877
	W0110 09:51:49.291772  471378 cli_runner.go:211] docker network inspect force-systemd-env-646877 returned with exit code 1
	I0110 09:51:49.291799  471378 network_create.go:287] error running [docker network inspect force-systemd-env-646877]: docker network inspect force-systemd-env-646877: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-646877 not found
	I0110 09:51:49.291811  471378 network_create.go:289] output of [docker network inspect force-systemd-env-646877]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-646877 not found
	
	** /stderr **
	I0110 09:51:49.291900  471378 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:51:49.316064  471378 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 09:51:49.316456  471378 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 09:51:49.316704  471378 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 09:51:49.317119  471378 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e6a30}
	I0110 09:51:49.317144  471378 network_create.go:124] attempt to create docker network force-systemd-env-646877 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 09:51:49.317203  471378 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-646877 force-systemd-env-646877
	I0110 09:51:49.392276  471378 network_create.go:108] docker network force-systemd-env-646877 192.168.76.0/24 created
	I0110 09:51:49.392306  471378 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-646877" container
	I0110 09:51:49.392373  471378 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 09:51:49.413082  471378 cli_runner.go:164] Run: docker volume create force-systemd-env-646877 --label name.minikube.sigs.k8s.io=force-systemd-env-646877 --label created_by.minikube.sigs.k8s.io=true
	I0110 09:51:49.435252  471378 oci.go:103] Successfully created a docker volume force-systemd-env-646877
	I0110 09:51:49.435338  471378 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-646877-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-646877 --entrypoint /usr/bin/test -v force-systemd-env-646877:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 09:51:50.080031  471378 oci.go:107] Successfully prepared a docker volume force-systemd-env-646877
	I0110 09:51:50.080113  471378 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:51:50.080128  471378 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 09:51:50.080200  471378 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-646877:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 09:51:54.356585  471378 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-646877:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.276330438s)
	I0110 09:51:54.356621  471378 kic.go:203] duration metric: took 4.276488963s to extract preloaded images to volume ...
	W0110 09:51:54.356751  471378 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 09:51:54.356867  471378 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 09:51:54.475148  471378 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-646877 --name force-systemd-env-646877 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-646877 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-646877 --network force-systemd-env-646877 --ip 192.168.76.2 --volume force-systemd-env-646877:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 09:51:54.827617  471378 cli_runner.go:164] Run: docker container inspect force-systemd-env-646877 --format={{.State.Running}}
	I0110 09:51:54.857407  471378 cli_runner.go:164] Run: docker container inspect force-systemd-env-646877 --format={{.State.Status}}
	I0110 09:51:54.899923  471378 cli_runner.go:164] Run: docker exec force-systemd-env-646877 stat /var/lib/dpkg/alternatives/iptables
	I0110 09:51:54.961727  471378 oci.go:144] the created container "force-systemd-env-646877" has a running status.
	I0110 09:51:54.961772  471378 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-env-646877/id_rsa...
	I0110 09:51:55.140930  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-env-646877/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 09:51:55.140978  471378 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-env-646877/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 09:51:55.176088  471378 cli_runner.go:164] Run: docker container inspect force-systemd-env-646877 --format={{.State.Status}}
	I0110 09:51:55.203627  471378 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 09:51:55.203646  471378 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-646877 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 09:51:55.278027  471378 cli_runner.go:164] Run: docker container inspect force-systemd-env-646877 --format={{.State.Status}}
	I0110 09:51:55.304694  471378 machine.go:94] provisionDockerMachine start ...
	I0110 09:51:55.304795  471378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-646877
	I0110 09:51:55.348124  471378 main.go:144] libmachine: Using SSH client type: native
	I0110 09:51:55.348454  471378 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33379 <nil> <nil>}
	I0110 09:51:55.348463  471378 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 09:51:55.349215  471378 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60518->127.0.0.1:33379: read: connection reset by peer
	I0110 09:51:58.501040  471378 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-646877
	
	I0110 09:51:58.501068  471378 ubuntu.go:182] provisioning hostname "force-systemd-env-646877"
	I0110 09:51:58.501151  471378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-646877
	I0110 09:51:58.519401  471378 main.go:144] libmachine: Using SSH client type: native
	I0110 09:51:58.519732  471378 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33379 <nil> <nil>}
	I0110 09:51:58.519749  471378 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-646877 && echo "force-systemd-env-646877" | sudo tee /etc/hostname
	I0110 09:51:58.689260  471378 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-646877
	
	I0110 09:51:58.689429  471378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-646877
	I0110 09:51:58.716618  471378 main.go:144] libmachine: Using SSH client type: native
	I0110 09:51:58.716930  471378 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33379 <nil> <nil>}
	I0110 09:51:58.716946  471378 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-646877' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-646877/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-646877' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 09:51:58.865966  471378 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 09:51:58.865988  471378 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 09:51:58.866024  471378 ubuntu.go:190] setting up certificates
	I0110 09:51:58.866034  471378 provision.go:84] configureAuth start
	I0110 09:51:58.866109  471378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-646877
	I0110 09:51:58.883030  471378 provision.go:143] copyHostCerts
	I0110 09:51:58.883069  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 09:51:58.883100  471378 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 09:51:58.883107  471378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 09:51:58.883183  471378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 09:51:58.883309  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 09:51:58.883327  471378 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 09:51:58.883332  471378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 09:51:58.883364  471378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 09:51:58.883410  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 09:51:58.883447  471378 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 09:51:58.883451  471378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 09:51:58.883475  471378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 09:51:58.883522  471378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-646877 san=[127.0.0.1 192.168.76.2 force-systemd-env-646877 localhost minikube]
	I0110 09:51:59.270062  471378 provision.go:177] copyRemoteCerts
	I0110 09:51:59.270171  471378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 09:51:59.270247  471378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-646877
	I0110 09:51:59.289378  471378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33379 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-env-646877/id_rsa Username:docker}
	I0110 09:51:59.392948  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 09:51:59.393013  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 09:51:59.417019  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 09:51:59.417090  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0110 09:51:59.438612  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 09:51:59.438726  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 09:51:59.457616  471378 provision.go:87] duration metric: took 591.558636ms to configureAuth
	I0110 09:51:59.457641  471378 ubuntu.go:206] setting minikube options for container-runtime
	I0110 09:51:59.457839  471378 config.go:182] Loaded profile config "force-systemd-env-646877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:51:59.457946  471378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-646877
	I0110 09:51:59.483395  471378 main.go:144] libmachine: Using SSH client type: native
	I0110 09:51:59.483706  471378 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33379 <nil> <nil>}
	I0110 09:51:59.483719  471378 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 09:51:59.939471  471378 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 09:51:59.939493  471378 machine.go:97] duration metric: took 4.634776942s to provisionDockerMachine
	I0110 09:51:59.939503  471378 client.go:176] duration metric: took 10.677547314s to LocalClient.Create
	I0110 09:51:59.939516  471378 start.go:167] duration metric: took 10.677601354s to libmachine.API.Create "force-systemd-env-646877"
	I0110 09:51:59.939524  471378 start.go:293] postStartSetup for "force-systemd-env-646877" (driver="docker")
	I0110 09:51:59.939535  471378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 09:51:59.939594  471378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 09:51:59.939631  471378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-646877
	I0110 09:51:59.981639  471378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33379 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-env-646877/id_rsa Username:docker}
	I0110 09:52:00.107291  471378 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 09:52:00.114825  471378 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 09:52:00.114913  471378 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 09:52:00.114942  471378 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 09:52:00.115039  471378 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 09:52:00.115168  471378 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 09:52:00.115197  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> /etc/ssl/certs/3098982.pem
	I0110 09:52:00.115379  471378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 09:52:00.129537  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 09:52:00.173578  471378 start.go:296] duration metric: took 234.037276ms for postStartSetup
	I0110 09:52:00.174082  471378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-646877
	I0110 09:52:00.206724  471378 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/config.json ...
	I0110 09:52:00.207071  471378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:52:00.207133  471378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-646877
	I0110 09:52:00.240612  471378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33379 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-env-646877/id_rsa Username:docker}
	I0110 09:52:00.378845  471378 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 09:52:00.391207  471378 start.go:128] duration metric: took 11.13365674s to createHost
	I0110 09:52:00.391239  471378 start.go:83] releasing machines lock for "force-systemd-env-646877", held for 11.133784979s
	I0110 09:52:00.391335  471378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-646877
	I0110 09:52:00.423494  471378 ssh_runner.go:195] Run: cat /version.json
	I0110 09:52:00.423563  471378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-646877
	I0110 09:52:00.423563  471378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 09:52:00.423638  471378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-646877
	I0110 09:52:00.468256  471378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33379 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-env-646877/id_rsa Username:docker}
	I0110 09:52:00.469650  471378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33379 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-env-646877/id_rsa Username:docker}
	I0110 09:52:00.617077  471378 ssh_runner.go:195] Run: systemctl --version
	I0110 09:52:00.758425  471378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 09:52:00.830340  471378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 09:52:00.838084  471378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 09:52:00.838205  471378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 09:52:00.880343  471378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 09:52:00.880415  471378 start.go:496] detecting cgroup driver to use...
	I0110 09:52:00.880447  471378 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 09:52:00.880593  471378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 09:52:00.919275  471378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 09:52:00.938218  471378 docker.go:218] disabling cri-docker service (if available) ...
	I0110 09:52:00.938327  471378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 09:52:00.962806  471378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 09:52:00.988349  471378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 09:52:01.198898  471378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 09:52:01.447261  471378 docker.go:234] disabling docker service ...
	I0110 09:52:01.447350  471378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 09:52:01.481051  471378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 09:52:01.502473  471378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 09:52:01.713337  471378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 09:52:01.901346  471378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 09:52:01.918807  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 09:52:01.934353  471378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 09:52:01.934412  471378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:52:01.943596  471378 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 09:52:01.943672  471378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:52:01.953260  471378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:52:01.962348  471378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:52:01.971598  471378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 09:52:01.979907  471378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:52:01.988929  471378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:52:02.005212  471378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:52:02.017535  471378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 09:52:02.027353  471378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 09:52:02.036086  471378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:52:02.205148  471378 ssh_runner.go:195] Run: sudo systemctl restart crio
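	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the "systemd" cgroup manager, a pod-scoped conmon cgroup and the net.ipv4.ip_unprivileged_port_start=0 sysctl before the runtime is restarted. A minimal way to confirm the drop-in took effect on the node (a sketch, assuming the same file path and shell access to the machine, e.g. minikube ssh -p force-systemd-env-646877):
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'   # effective values as CRI-O parses them
	  sudo systemctl is-active crio && sudo crictl --timeout=10s info >/dev/null && echo "CRI socket reachable"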
	I0110 09:52:02.465892  471378 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 09:52:02.466029  471378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 09:52:02.470765  471378 start.go:574] Will wait 60s for crictl version
	I0110 09:52:02.470890  471378 ssh_runner.go:195] Run: which crictl
	I0110 09:52:02.475110  471378 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 09:52:02.519758  471378 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 09:52:02.519931  471378 ssh_runner.go:195] Run: crio --version
	I0110 09:52:02.560453  471378 ssh_runner.go:195] Run: crio --version
	I0110 09:52:02.616596  471378 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 09:52:02.619864  471378 cli_runner.go:164] Run: docker network inspect force-systemd-env-646877 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:52:02.642588  471378 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 09:52:02.647106  471378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:52:02.658819  471378 kubeadm.go:884] updating cluster {Name:force-systemd-env-646877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-646877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 09:52:02.658947  471378 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:52:02.659006  471378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:52:02.711661  471378 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 09:52:02.711687  471378 crio.go:433] Images already preloaded, skipping extraction
	I0110 09:52:02.711742  471378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:52:02.748960  471378 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 09:52:02.748983  471378 cache_images.go:86] Images are preloaded, skipping loading
	I0110 09:52:02.748992  471378 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 09:52:02.749078  471378 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-646877 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-646877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
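	The [Unit]/[Service] fragment above is the kubelet drop-in that minikube renders from the node config; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to /lib/systemd/system/kubelet.service. To see exactly what systemd loads after the daemon-reload (a sketch reusing those paths):
	  sudo systemctl cat kubelet                                        # unit plus every drop-in, 10-kubeadm.conf included
	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  sudo systemctl status kubelet --no-pager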
	I0110 09:52:02.749157  471378 ssh_runner.go:195] Run: crio config
	I0110 09:52:02.838783  471378 cni.go:84] Creating CNI manager for ""
	I0110 09:52:02.838804  471378 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:52:02.838825  471378 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 09:52:02.838848  471378 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-646877 NodeName:force-systemd-env-646877 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 09:52:02.838976  471378 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-646877"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 09:52:02.839048  471378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 09:52:02.848453  471378 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 09:52:02.848561  471378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 09:52:02.855887  471378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0110 09:52:02.868917  471378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 09:52:02.881922  471378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
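	One invariant worth spelling out: the KubeletConfiguration in the kubeadm config above sets cgroupDriver: systemd, which has to match the cgroup_manager = "systemd" written into the CRI-O drop-in earlier in this run. A quick cross-check on the node once kubeadm.yaml.new has been written (a sketch; the file is copied to /var/tmp/minikube/kubeadm.yaml before init later in the log):
	  grep -E 'cgroupDriver|containerRuntimeEndpoint' /var/tmp/minikube/kubeadm.yaml.new
	  sudo grep -E 'cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf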
	I0110 09:52:02.895074  471378 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 09:52:02.898801  471378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:52:02.908358  471378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:52:03.085721  471378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:52:03.114209  471378 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877 for IP: 192.168.76.2
	I0110 09:52:03.114229  471378 certs.go:195] generating shared ca certs ...
	I0110 09:52:03.114245  471378 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:52:03.114388  471378 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 09:52:03.114444  471378 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 09:52:03.114457  471378 certs.go:257] generating profile certs ...
	I0110 09:52:03.114511  471378 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/client.key
	I0110 09:52:03.114533  471378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/client.crt with IP's: []
	I0110 09:52:03.445886  471378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/client.crt ...
	I0110 09:52:03.445927  471378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/client.crt: {Name:mk236d9f7a32aebb4b547b02050c27448af0aa42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:52:03.446128  471378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/client.key ...
	I0110 09:52:03.446145  471378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/client.key: {Name:mk8a05158ebe1bf29d2618113cd5026101a3ae79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:52:03.446247  471378 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.key.1d1deedd
	I0110 09:52:03.446267  471378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.crt.1d1deedd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 09:52:03.831016  471378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.crt.1d1deedd ...
	I0110 09:52:03.831090  471378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.crt.1d1deedd: {Name:mk8003415a18e52c1f04c89a552d8393c73702d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:52:03.831329  471378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.key.1d1deedd ...
	I0110 09:52:03.831369  471378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.key.1d1deedd: {Name:mkdb3a90afe1b3d7623478a3e0c286418ab53197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:52:03.831507  471378 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.crt.1d1deedd -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.crt
	I0110 09:52:03.831629  471378 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.key.1d1deedd -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.key
	I0110 09:52:03.831717  471378 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/proxy-client.key
	I0110 09:52:03.831768  471378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/proxy-client.crt with IP's: []
	I0110 09:52:04.160431  471378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/proxy-client.crt ...
	I0110 09:52:04.160511  471378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/proxy-client.crt: {Name:mkb50b22989f057cc2e393244cd1889173ebd09f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:52:04.160749  471378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/proxy-client.key ...
	I0110 09:52:04.160789  471378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/proxy-client.key: {Name:mkf54b4d773d50b92697eab05bfea2bc3b0a46f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:52:04.160936  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 09:52:04.160980  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 09:52:04.161012  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 09:52:04.161058  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 09:52:04.161094  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 09:52:04.161139  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 09:52:04.161180  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 09:52:04.161221  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 09:52:04.161316  471378 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 09:52:04.161378  471378 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 09:52:04.161404  471378 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 09:52:04.161471  471378 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 09:52:04.161544  471378 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 09:52:04.161595  471378 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 09:52:04.161677  471378 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 09:52:04.161733  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> /usr/share/ca-certificates/3098982.pem
	I0110 09:52:04.161764  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:52:04.161807  471378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem -> /usr/share/ca-certificates/309898.pem
	I0110 09:52:04.162357  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 09:52:04.180460  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 09:52:04.198538  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 09:52:04.216658  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 09:52:04.234867  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 09:52:04.253615  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 09:52:04.281611  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 09:52:04.302989  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-env-646877/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 09:52:04.321667  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 09:52:04.340759  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 09:52:04.363777  471378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 09:52:04.385535  471378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
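	With the CA, profile certificates and kubeconfig copied over, a short sanity pass on what actually landed in /var/lib/minikube (a sketch, reusing the destination paths from the scp steps above) could be:
	  sudo ls -l /var/lib/minikube/certs /var/lib/minikube/kubeconfig
	  sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt
	  sudo openssl x509 -noout -subject -in /var/lib/minikube/certs/ca.crt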
	I0110 09:52:04.400241  471378 ssh_runner.go:195] Run: openssl version
	I0110 09:52:04.407966  471378 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 09:52:04.418304  471378 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 09:52:04.426544  471378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 09:52:04.431049  471378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 09:52:04.431115  471378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 09:52:04.477241  471378 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 09:52:04.485752  471378 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
	I0110 09:52:04.493724  471378 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:52:04.501677  471378 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 09:52:04.510494  471378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:52:04.514598  471378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:52:04.514669  471378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:52:04.557246  471378 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 09:52:04.564852  471378 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 09:52:04.572066  471378 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 09:52:04.582695  471378 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 09:52:04.591316  471378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 09:52:04.596955  471378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 09:52:04.597068  471378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 09:52:04.645605  471378 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 09:52:04.658708  471378 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
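	The openssl/ln pattern above is OpenSSL's subject-hash convention: each CA file is hashed with openssl x509 -hash and then exposed under /etc/ssl/certs as <hash>.0 so the library can find it by issuer. In generic form (a sketch with an example path taken from the steps above; on a Debian bookworm image like this one, update-ca-certificates maintains the same links for certificates it manages):
	  CERT=/usr/share/ca-certificates/minikubeCA.pem     # example certificate from the steps above
	  HASH=$(openssl x509 -hash -noout -in "$CERT")
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"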
	I0110 09:52:04.668219  471378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 09:52:04.672561  471378 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 09:52:04.672660  471378 kubeadm.go:401] StartCluster: {Name:force-systemd-env-646877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-646877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:52:04.672761  471378 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:52:04.672889  471378 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:52:04.716149  471378 cri.go:96] found id: ""
	I0110 09:52:04.716272  471378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 09:52:04.728308  471378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 09:52:04.737310  471378 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:52:04.737445  471378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:52:04.746788  471378 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:52:04.746862  471378 kubeadm.go:158] found existing configuration files:
	
	I0110 09:52:04.746946  471378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:52:04.755996  471378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:52:04.756106  471378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:52:04.763763  471378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:52:04.771984  471378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:52:04.772095  471378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:52:04.780286  471378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:52:04.788896  471378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:52:04.788968  471378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:52:04.797071  471378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:52:04.806800  471378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:52:04.806912  471378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:52:04.819316  471378 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:52:04.863375  471378 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:52:04.864376  471378 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:52:04.965387  471378 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:52:04.965527  471378 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:52:04.965591  471378 kubeadm.go:319] OS: Linux
	I0110 09:52:04.965671  471378 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:52:04.965757  471378 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:52:04.965875  471378 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:52:04.965959  471378 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:52:04.966044  471378 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:52:04.966115  471378 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:52:04.966199  471378 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:52:04.966268  471378 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:52:04.966348  471378 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:52:05.049508  471378 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:52:05.049683  471378 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:52:05.049811  471378 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:52:05.057972  471378 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:52:05.063471  471378 out.go:252]   - Generating certificates and keys ...
	I0110 09:52:05.063647  471378 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:52:05.063768  471378 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:52:05.170895  471378 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 09:52:05.498798  471378 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 09:52:05.923468  471378 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 09:52:05.984766  471378 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 09:52:06.362386  471378 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 09:52:06.362757  471378 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-646877 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 09:52:06.505192  471378 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 09:52:06.505549  471378 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-646877 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 09:52:06.928933  471378 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 09:52:07.330507  471378 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 09:52:07.476653  471378 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 09:52:07.477002  471378 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:52:07.513927  471378 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:52:07.978207  471378 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:52:08.279862  471378 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:52:08.549507  471378 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:52:08.973755  471378 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:52:08.974312  471378 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:52:08.976946  471378 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:52:08.980563  471378 out.go:252]   - Booting up control plane ...
	I0110 09:52:08.980694  471378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:52:08.980837  471378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:52:08.980915  471378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:52:08.996750  471378 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:52:08.996868  471378 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:52:09.005369  471378 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:52:09.005813  471378 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:52:09.006068  471378 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:52:09.160325  471378 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:52:09.160452  471378 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 09:56:09.161054  471378 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000970458s
	I0110 09:56:09.161086  471378 kubeadm.go:319] 
	I0110 09:56:09.161141  471378 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 09:56:09.161178  471378 kubeadm.go:319] 	- The kubelet is not running
	I0110 09:56:09.161288  471378 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 09:56:09.161297  471378 kubeadm.go:319] 
	I0110 09:56:09.161406  471378 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 09:56:09.161440  471378 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 09:56:09.161473  471378 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 09:56:09.161479  471378 kubeadm.go:319] 
	I0110 09:56:09.166110  471378 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:56:09.166525  471378 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:56:09.166656  471378 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 09:56:09.166951  471378 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 09:56:09.166965  471378 kubeadm.go:319] 
	W0110 09:56:09.167239  471378 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-646877 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-646877 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000970458s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-646877 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-646877 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000970458s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I0110 09:56:09.167332  471378 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0110 09:56:09.167607  471378 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 09:56:09.580871  471378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:56:09.594449  471378 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:56:09.594512  471378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:56:09.602675  471378 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:56:09.602698  471378 kubeadm.go:158] found existing configuration files:
	
	I0110 09:56:09.602751  471378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:56:09.610652  471378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:56:09.610718  471378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:56:09.618142  471378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:56:09.625820  471378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:56:09.625906  471378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:56:09.633710  471378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:56:09.641380  471378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:56:09.641472  471378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:56:09.648875  471378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:56:09.656976  471378 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:56:09.657046  471378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:56:09.664408  471378 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:56:09.706211  471378 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 09:56:09.706466  471378 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 09:56:09.773474  471378 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 09:56:09.773584  471378 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 09:56:09.773679  471378 kubeadm.go:319] OS: Linux
	I0110 09:56:09.773737  471378 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 09:56:09.773794  471378 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 09:56:09.773869  471378 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 09:56:09.773925  471378 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 09:56:09.773973  471378 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 09:56:09.774052  471378 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 09:56:09.774115  471378 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 09:56:09.774168  471378 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 09:56:09.774220  471378 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 09:56:09.841752  471378 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 09:56:09.841865  471378 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 09:56:09.841952  471378 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 09:56:09.851044  471378 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 09:56:09.856307  471378 out.go:252]   - Generating certificates and keys ...
	I0110 09:56:09.856451  471378 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 09:56:09.856648  471378 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 09:56:09.856771  471378 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 09:56:09.856881  471378 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 09:56:09.856974  471378 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 09:56:09.857042  471378 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 09:56:09.857107  471378 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 09:56:09.857169  471378 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 09:56:09.857731  471378 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 09:56:09.858105  471378 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 09:56:09.858392  471378 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 09:56:09.858601  471378 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 09:56:09.944113  471378 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 09:56:10.587130  471378 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 09:56:10.880051  471378 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 09:56:11.299006  471378 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 09:56:11.428648  471378 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 09:56:11.429349  471378 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 09:56:11.432044  471378 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 09:56:11.435245  471378 out.go:252]   - Booting up control plane ...
	I0110 09:56:11.435349  471378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 09:56:11.435423  471378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 09:56:11.435486  471378 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 09:56:11.449865  471378 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 09:56:11.449981  471378 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 09:56:11.460181  471378 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 09:56:11.460283  471378 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 09:56:11.460326  471378 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 09:56:11.598471  471378 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 09:56:11.598597  471378 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:00:11.598932  471378 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000851101s
	I0110 10:00:11.598967  471378 kubeadm.go:319] 
	I0110 10:00:11.599021  471378 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 10:00:11.599058  471378 kubeadm.go:319] 	- The kubelet is not running
	I0110 10:00:11.599164  471378 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 10:00:11.599173  471378 kubeadm.go:319] 
	I0110 10:00:11.599271  471378 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 10:00:11.599304  471378 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 10:00:11.599334  471378 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 10:00:11.599338  471378 kubeadm.go:319] 
	I0110 10:00:11.604246  471378 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 10:00:11.604717  471378 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 10:00:11.604834  471378 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 10:00:11.605063  471378 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 10:00:11.605074  471378 kubeadm.go:319] 
	I0110 10:00:11.605139  471378 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 10:00:11.605203  471378 kubeadm.go:403] duration metric: took 8m6.932547357s to StartCluster
	I0110 10:00:11.605241  471378 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 10:00:11.605307  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 10:00:11.631229  471378 cri.go:96] found id: ""
	I0110 10:00:11.631276  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.631286  471378 logs.go:284] No container was found matching "kube-apiserver"
	I0110 10:00:11.631294  471378 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 10:00:11.631368  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 10:00:11.657641  471378 cri.go:96] found id: ""
	I0110 10:00:11.657669  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.657678  471378 logs.go:284] No container was found matching "etcd"
	I0110 10:00:11.657685  471378 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 10:00:11.657747  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 10:00:11.683821  471378 cri.go:96] found id: ""
	I0110 10:00:11.683847  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.683857  471378 logs.go:284] No container was found matching "coredns"
	I0110 10:00:11.683864  471378 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 10:00:11.683922  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 10:00:11.709398  471378 cri.go:96] found id: ""
	I0110 10:00:11.709424  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.709438  471378 logs.go:284] No container was found matching "kube-scheduler"
	I0110 10:00:11.709445  471378 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 10:00:11.709502  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 10:00:11.739817  471378 cri.go:96] found id: ""
	I0110 10:00:11.739839  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.739848  471378 logs.go:284] No container was found matching "kube-proxy"
	I0110 10:00:11.739856  471378 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 10:00:11.739913  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 10:00:11.768653  471378 cri.go:96] found id: ""
	I0110 10:00:11.768678  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.768686  471378 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 10:00:11.768694  471378 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 10:00:11.768752  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 10:00:11.795141  471378 cri.go:96] found id: ""
	I0110 10:00:11.795167  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.795176  471378 logs.go:284] No container was found matching "kindnet"
	I0110 10:00:11.795186  471378 logs.go:123] Gathering logs for CRI-O ...
	I0110 10:00:11.795199  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0110 10:00:11.828287  471378 logs.go:123] Gathering logs for container status ...
	I0110 10:00:11.828320  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 10:00:11.858180  471378 logs.go:123] Gathering logs for kubelet ...
	I0110 10:00:11.858205  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 10:00:11.943321  471378 logs.go:123] Gathering logs for dmesg ...
	I0110 10:00:11.943402  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 10:00:11.964176  471378 logs.go:123] Gathering logs for describe nodes ...
	I0110 10:00:11.964256  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 10:00:12.049691  471378 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 10:00:12.041284    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.041791    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.043476    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.043910    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.045588    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 10:00:12.041284    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.041791    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.043476    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.043910    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.045588    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0110 10:00:12.049722  471378 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000851101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 10:00:12.049758  471378 out.go:285] * 
	* 
	W0110 10:00:12.049816  471378 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000851101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000851101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 10:00:12.049832  471378 out.go:285] * 
	* 
	W0110 10:00:12.050082  471378 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 10:00:12.055837  471378 out.go:203] 
	W0110 10:00:12.058848  471378 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000851101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000851101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 10:00:12.058906  471378 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 10:00:12.058934  471378 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 10:00:12.062018  471378 out.go:203] 

                                                
                                                
** /stderr **
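The stderr block above shows kubeadm aborting at the wait-control-plane phase because the kubelet never reported healthy on http://127.0.0.1:10248/healthz. As a hedged follow-up sketch only (not part of this run), the start command recorded below could be retried with the cgroup-driver override that the minikube output itself suggests, and the kubelet inspected with the commands kubeadm recommends; profile name and flags are taken from this report, and the minikube delete/ssh subcommands are assumed standard minikube usage, not verified here:

	# hedged reproduction sketch, reusing the flags recorded in this report
	out/minikube-linux-arm64 delete -p force-systemd-env-646877
	out/minikube-linux-arm64 start -p force-systemd-env-646877 --memory=3072 --alsologtostderr -v=5 \
	  --driver=docker --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

	# inspect the kubelet inside the node, per kubeadm's own hints
	out/minikube-linux-arm64 -p force-systemd-env-646877 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p force-systemd-env-646877 ssh -- sudo journalctl -xeu kubelet
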
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-646877 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2026-01-10 10:00:12.120436575 +0000 UTC m=+2833.346770824
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-646877
helpers_test.go:244: (dbg) docker inspect force-systemd-env-646877:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6c854bf78d16f2a1e7fb9f28c63fc1e5752ca019e63d2fd23197e79bcb8d82f3",
	        "Created": "2026-01-10T09:51:54.491721567Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T09:51:54.564015753Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/6c854bf78d16f2a1e7fb9f28c63fc1e5752ca019e63d2fd23197e79bcb8d82f3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c854bf78d16f2a1e7fb9f28c63fc1e5752ca019e63d2fd23197e79bcb8d82f3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c854bf78d16f2a1e7fb9f28c63fc1e5752ca019e63d2fd23197e79bcb8d82f3/hosts",
	        "LogPath": "/var/lib/docker/containers/6c854bf78d16f2a1e7fb9f28c63fc1e5752ca019e63d2fd23197e79bcb8d82f3/6c854bf78d16f2a1e7fb9f28c63fc1e5752ca019e63d2fd23197e79bcb8d82f3-json.log",
	        "Name": "/force-systemd-env-646877",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-646877:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-646877",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c854bf78d16f2a1e7fb9f28c63fc1e5752ca019e63d2fd23197e79bcb8d82f3",
	                "LowerDir": "/var/lib/docker/overlay2/6e6b8d8d5219ce8a695d7eafc7c746483dca7c0eb1590567a5e5ed886319aa33-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e6b8d8d5219ce8a695d7eafc7c746483dca7c0eb1590567a5e5ed886319aa33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e6b8d8d5219ce8a695d7eafc7c746483dca7c0eb1590567a5e5ed886319aa33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e6b8d8d5219ce8a695d7eafc7c746483dca7c0eb1590567a5e5ed886319aa33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-646877",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-646877/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-646877",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-646877",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-646877",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "64fc28ce1be1911c05ebdd76b878b8c4f9e17c7a9d75cbfcf68cb4c1eeb67195",
	            "SandboxKey": "/var/run/docker/netns/64fc28ce1be1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33379"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33380"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33381"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33382"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-646877": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:86:03:26:9a:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c97ab4c75741b63f02a5f01beeb8f3d2fa4aaef7ae8efb08930aa8c9cc9686ad",
	                    "EndpointID": "513ae4307c61cb06da287fa3977c9880f0fa88018f8a09fb1eb11791789df65f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-646877",
	                        "6c854bf78d16"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-646877 -n force-systemd-env-646877
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-646877 -n force-systemd-env-646877: exit status 6 (318.96784ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 10:00:12.457398  494023 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-646877" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-646877 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-255897 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl cat docker --no-pager                                                                       │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /etc/docker/daemon.json                                                                           │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo docker system info                                                                                    │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cri-dockerd --version                                                                                 │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl cat containerd --no-pager                                                                   │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /etc/containerd/config.toml                                                                       │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo containerd config dump                                                                                │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl cat crio --no-pager                                                                         │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo crio config                                                                                           │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ delete  │ -p cilium-255897                                                                                                            │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                   │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:57 UTC │ 10 Jan 26 09:58 UTC │
	│ delete  │ -p cert-expiration-599529                                                                                                   │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │ 10 Jan 26 09:58 UTC │
	│ start   │ -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-524845 │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 09:58:07
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 09:58:07.553679  490351 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:58:07.553848  490351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:58:07.553862  490351 out.go:374] Setting ErrFile to fd 2...
	I0110 09:58:07.553868  490351 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:58:07.554176  490351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:58:07.554702  490351 out.go:368] Setting JSON to false
	I0110 09:58:07.555848  490351 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9637,"bootTime":1768029451,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:58:07.555930  490351 start.go:143] virtualization:  
	I0110 09:58:07.559535  490351 out.go:179] * [force-systemd-flag-524845] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:58:07.563995  490351 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:58:07.564051  490351 notify.go:221] Checking for updates...
	I0110 09:58:07.570504  490351 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:58:07.573594  490351 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:58:07.576781  490351 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:58:07.579932  490351 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:58:07.582938  490351 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:58:07.591723  490351 config.go:182] Loaded profile config "force-systemd-env-646877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:58:07.591896  490351 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:58:07.616596  490351 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:58:07.616787  490351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:58:07.678314  490351 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:58:07.668051789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:58:07.678429  490351 docker.go:319] overlay module found
	I0110 09:58:07.681014  490351 out.go:179] * Using the docker driver based on user configuration
	I0110 09:58:07.683322  490351 start.go:309] selected driver: docker
	I0110 09:58:07.683343  490351 start.go:928] validating driver "docker" against <nil>
	I0110 09:58:07.683358  490351 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:58:07.684110  490351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:58:07.736389  490351 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:58:07.726925717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:58:07.736620  490351 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 09:58:07.736842  490351 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 09:58:07.739329  490351 out.go:179] * Using Docker driver with root privileges
	I0110 09:58:07.741743  490351 cni.go:84] Creating CNI manager for ""
	I0110 09:58:07.741819  490351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:58:07.741839  490351 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 09:58:07.741921  490351 start.go:353] cluster config:
	{Name:force-systemd-flag-524845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-524845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:58:07.744819  490351 out.go:179] * Starting "force-systemd-flag-524845" primary control-plane node in "force-systemd-flag-524845" cluster
	I0110 09:58:07.747245  490351 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 09:58:07.749876  490351 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:58:07.752571  490351 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:58:07.752619  490351 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 09:58:07.752630  490351 cache.go:65] Caching tarball of preloaded images
	I0110 09:58:07.752657  490351 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 09:58:07.752727  490351 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 09:58:07.752738  490351 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 09:58:07.752837  490351 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/config.json ...
	I0110 09:58:07.752855  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/config.json: {Name:mkc575e6211f64f692579bcfde7f5500b6e9ddb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:07.778196  490351 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 09:58:07.778220  490351 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 09:58:07.778237  490351 cache.go:243] Successfully downloaded all kic artifacts
	I0110 09:58:07.778269  490351 start.go:360] acquireMachinesLock for force-systemd-flag-524845: {Name:mkd6a15301a8cdc65884d926e54f9d5744e40d28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 09:58:07.778383  490351 start.go:364] duration metric: took 93.573µs to acquireMachinesLock for "force-systemd-flag-524845"
	I0110 09:58:07.778415  490351 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-524845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-524845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 09:58:07.778480  490351 start.go:125] createHost starting for "" (driver="docker")
	I0110 09:58:07.781858  490351 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 09:58:07.782098  490351 start.go:159] libmachine.API.Create for "force-systemd-flag-524845" (driver="docker")
	I0110 09:58:07.782134  490351 client.go:173] LocalClient.Create starting
	I0110 09:58:07.782206  490351 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 09:58:07.782245  490351 main.go:144] libmachine: Decoding PEM data...
	I0110 09:58:07.782264  490351 main.go:144] libmachine: Parsing certificate...
	I0110 09:58:07.782329  490351 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 09:58:07.782351  490351 main.go:144] libmachine: Decoding PEM data...
	I0110 09:58:07.782362  490351 main.go:144] libmachine: Parsing certificate...
	I0110 09:58:07.782738  490351 cli_runner.go:164] Run: docker network inspect force-systemd-flag-524845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 09:58:07.799057  490351 cli_runner.go:211] docker network inspect force-systemd-flag-524845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 09:58:07.799138  490351 network_create.go:284] running [docker network inspect force-systemd-flag-524845] to gather additional debugging logs...
	I0110 09:58:07.799162  490351 cli_runner.go:164] Run: docker network inspect force-systemd-flag-524845
	W0110 09:58:07.815154  490351 cli_runner.go:211] docker network inspect force-systemd-flag-524845 returned with exit code 1
	I0110 09:58:07.815185  490351 network_create.go:287] error running [docker network inspect force-systemd-flag-524845]: docker network inspect force-systemd-flag-524845: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-524845 not found
	I0110 09:58:07.815205  490351 network_create.go:289] output of [docker network inspect force-systemd-flag-524845]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-524845 not found
	
	** /stderr **
	I0110 09:58:07.815297  490351 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:58:07.832656  490351 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 09:58:07.833146  490351 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 09:58:07.833394  490351 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 09:58:07.833681  490351 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c97ab4c75741 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:12:39:bd:f9:f1:fc} reservation:<nil>}
	I0110 09:58:07.834131  490351 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e9c10}
	I0110 09:58:07.834152  490351 network_create.go:124] attempt to create docker network force-systemd-flag-524845 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 09:58:07.834221  490351 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-524845 force-systemd-flag-524845
	I0110 09:58:07.892173  490351 network_create.go:108] docker network force-systemd-flag-524845 192.168.85.0/24 created
	I0110 09:58:07.892207  490351 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-524845" container
	I0110 09:58:07.892281  490351 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 09:58:07.933086  490351 cli_runner.go:164] Run: docker volume create force-systemd-flag-524845 --label name.minikube.sigs.k8s.io=force-systemd-flag-524845 --label created_by.minikube.sigs.k8s.io=true
	I0110 09:58:07.959040  490351 oci.go:103] Successfully created a docker volume force-systemd-flag-524845
	I0110 09:58:07.959128  490351 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-524845-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-524845 --entrypoint /usr/bin/test -v force-systemd-flag-524845:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 09:58:08.499808  490351 oci.go:107] Successfully prepared a docker volume force-systemd-flag-524845
	I0110 09:58:08.499893  490351 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:58:08.499911  490351 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 09:58:08.500007  490351 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-524845:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 09:58:12.385731  490351 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-524845:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.885683498s)
	I0110 09:58:12.385763  490351 kic.go:203] duration metric: took 3.885849941s to extract preloaded images to volume ...
	W0110 09:58:12.385889  490351 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 09:58:12.386026  490351 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 09:58:12.467213  490351 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-524845 --name force-systemd-flag-524845 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-524845 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-524845 --network force-systemd-flag-524845 --ip 192.168.85.2 --volume force-systemd-flag-524845:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 09:58:12.784803  490351 cli_runner.go:164] Run: docker container inspect force-systemd-flag-524845 --format={{.State.Running}}
	I0110 09:58:12.803015  490351 cli_runner.go:164] Run: docker container inspect force-systemd-flag-524845 --format={{.State.Status}}
	I0110 09:58:12.824900  490351 cli_runner.go:164] Run: docker exec force-systemd-flag-524845 stat /var/lib/dpkg/alternatives/iptables
	I0110 09:58:12.886539  490351 oci.go:144] the created container "force-systemd-flag-524845" has a running status.
	I0110 09:58:12.886567  490351 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa...
	I0110 09:58:13.322259  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 09:58:13.322346  490351 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 09:58:13.360073  490351 cli_runner.go:164] Run: docker container inspect force-systemd-flag-524845 --format={{.State.Status}}
	I0110 09:58:13.386765  490351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 09:58:13.386784  490351 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-524845 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 09:58:13.446974  490351 cli_runner.go:164] Run: docker container inspect force-systemd-flag-524845 --format={{.State.Status}}
	I0110 09:58:13.470135  490351 machine.go:94] provisionDockerMachine start ...
	I0110 09:58:13.470238  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:13.497722  490351 main.go:144] libmachine: Using SSH client type: native
	I0110 09:58:13.498048  490351 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I0110 09:58:13.498058  490351 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 09:58:13.725498  490351 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-524845
	
	I0110 09:58:13.725565  490351 ubuntu.go:182] provisioning hostname "force-systemd-flag-524845"
	I0110 09:58:13.725660  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:13.744097  490351 main.go:144] libmachine: Using SSH client type: native
	I0110 09:58:13.744429  490351 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I0110 09:58:13.744440  490351 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-524845 && echo "force-systemd-flag-524845" | sudo tee /etc/hostname
	I0110 09:58:13.909634  490351 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-524845
	
	I0110 09:58:13.909728  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:13.932967  490351 main.go:144] libmachine: Using SSH client type: native
	I0110 09:58:13.933272  490351 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I0110 09:58:13.933293  490351 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-524845' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-524845/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-524845' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 09:58:14.108892  490351 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 09:58:14.108916  490351 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 09:58:14.108934  490351 ubuntu.go:190] setting up certificates
	I0110 09:58:14.108953  490351 provision.go:84] configureAuth start
	I0110 09:58:14.109014  490351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-524845
	I0110 09:58:14.125630  490351 provision.go:143] copyHostCerts
	I0110 09:58:14.125675  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 09:58:14.125712  490351 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 09:58:14.125724  490351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 09:58:14.125802  490351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 09:58:14.125885  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 09:58:14.125912  490351 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 09:58:14.125920  490351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 09:58:14.125948  490351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 09:58:14.125992  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 09:58:14.126012  490351 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 09:58:14.126022  490351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 09:58:14.126049  490351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 09:58:14.126106  490351 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-524845 san=[127.0.0.1 192.168.85.2 force-systemd-flag-524845 localhost minikube]
	I0110 09:58:14.560742  490351 provision.go:177] copyRemoteCerts
	I0110 09:58:14.560814  490351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 09:58:14.560859  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:14.578654  490351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa Username:docker}
	I0110 09:58:14.685851  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 09:58:14.685911  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 09:58:14.707728  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 09:58:14.707788  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 09:58:14.731525  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 09:58:14.731597  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 09:58:14.749546  490351 provision.go:87] duration metric: took 640.570546ms to configureAuth
	I0110 09:58:14.749575  490351 ubuntu.go:206] setting minikube options for container-runtime
	I0110 09:58:14.749758  490351 config.go:182] Loaded profile config "force-systemd-flag-524845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:58:14.749869  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:14.767063  490351 main.go:144] libmachine: Using SSH client type: native
	I0110 09:58:14.767380  490351 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I0110 09:58:14.767402  490351 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 09:58:15.105106  490351 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 09:58:15.105131  490351 machine.go:97] duration metric: took 1.634977324s to provisionDockerMachine
	I0110 09:58:15.105143  490351 client.go:176] duration metric: took 7.32300228s to LocalClient.Create
	I0110 09:58:15.105154  490351 start.go:167] duration metric: took 7.323057576s to libmachine.API.Create "force-systemd-flag-524845"
	I0110 09:58:15.105162  490351 start.go:293] postStartSetup for "force-systemd-flag-524845" (driver="docker")
	I0110 09:58:15.105174  490351 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 09:58:15.105245  490351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 09:58:15.105292  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:15.123800  490351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa Username:docker}
	I0110 09:58:15.228404  490351 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 09:58:15.231553  490351 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 09:58:15.231581  490351 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 09:58:15.231593  490351 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 09:58:15.231648  490351 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 09:58:15.231736  490351 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 09:58:15.231748  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> /etc/ssl/certs/3098982.pem
	I0110 09:58:15.231858  490351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 09:58:15.238992  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 09:58:15.256546  490351 start.go:296] duration metric: took 151.368464ms for postStartSetup
	I0110 09:58:15.256951  490351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-524845
	I0110 09:58:15.273760  490351 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/config.json ...
	I0110 09:58:15.274042  490351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:58:15.274095  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:15.289696  490351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa Username:docker}
	I0110 09:58:15.389886  490351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 09:58:15.394911  490351 start.go:128] duration metric: took 7.616415969s to createHost
	I0110 09:58:15.394940  490351 start.go:83] releasing machines lock for "force-systemd-flag-524845", held for 7.616542058s
	I0110 09:58:15.395015  490351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-524845
	I0110 09:58:15.420582  490351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 09:58:15.420672  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:15.420827  490351 ssh_runner.go:195] Run: cat /version.json
	I0110 09:58:15.420859  490351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-524845
	I0110 09:58:15.453334  490351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa Username:docker}
	I0110 09:58:15.453867  490351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/force-systemd-flag-524845/id_rsa Username:docker}
	I0110 09:58:15.676978  490351 ssh_runner.go:195] Run: systemctl --version
	I0110 09:58:15.683523  490351 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 09:58:15.722378  490351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 09:58:15.727020  490351 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 09:58:15.727146  490351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 09:58:15.756451  490351 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 09:58:15.756482  490351 start.go:496] detecting cgroup driver to use...
	I0110 09:58:15.756530  490351 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 09:58:15.756623  490351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 09:58:15.775531  490351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 09:58:15.788613  490351 docker.go:218] disabling cri-docker service (if available) ...
	I0110 09:58:15.788677  490351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 09:58:15.805159  490351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 09:58:15.824052  490351 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 09:58:15.948607  490351 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 09:58:16.076637  490351 docker.go:234] disabling docker service ...
	I0110 09:58:16.076764  490351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 09:58:16.099692  490351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 09:58:16.113980  490351 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 09:58:16.245873  490351 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 09:58:16.360721  490351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 09:58:16.374259  490351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 09:58:16.388020  490351 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 09:58:16.388089  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.397382  490351 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 09:58:16.397461  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.406597  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.415927  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.425226  490351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 09:58:16.434195  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.443064  490351 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.456142  490351 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:58:16.465329  490351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 09:58:16.473497  490351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 09:58:16.481003  490351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:58:16.606702  490351 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 09:58:16.783719  490351 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 09:58:16.783789  490351 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 09:58:16.787356  490351 start.go:574] Will wait 60s for crictl version
	I0110 09:58:16.787417  490351 ssh_runner.go:195] Run: which crictl
	I0110 09:58:16.790756  490351 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 09:58:16.817454  490351 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 09:58:16.817566  490351 ssh_runner.go:195] Run: crio --version
	I0110 09:58:16.847420  490351 ssh_runner.go:195] Run: crio --version
	I0110 09:58:16.881854  490351 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 09:58:16.884738  490351 cli_runner.go:164] Run: docker network inspect force-systemd-flag-524845 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:58:16.905702  490351 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 09:58:16.910503  490351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:58:16.924072  490351 kubeadm.go:884] updating cluster {Name:force-systemd-flag-524845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-524845 Namespace:default APIServerHAVIP: APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 09:58:16.924190  490351 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:58:16.924255  490351 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:58:16.968317  490351 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 09:58:16.968344  490351 crio.go:433] Images already preloaded, skipping extraction
	I0110 09:58:16.968414  490351 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:58:16.994323  490351 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 09:58:16.994345  490351 cache_images.go:86] Images are preloaded, skipping loading
	I0110 09:58:16.994354  490351 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 09:58:16.994443  490351 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-524845 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-524845 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 09:58:16.994529  490351 ssh_runner.go:195] Run: crio config
	I0110 09:58:17.056576  490351 cni.go:84] Creating CNI manager for ""
	I0110 09:58:17.056647  490351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:58:17.056682  490351 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 09:58:17.056738  490351 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-524845 NodeName:force-systemd-flag-524845 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt Sta
ticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 09:58:17.056900  490351 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-524845"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 09:58:17.057007  490351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 09:58:17.064554  490351 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 09:58:17.064628  490351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 09:58:17.071927  490351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0110 09:58:17.084201  490351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 09:58:17.097109  490351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
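At this point the generated kubeadm config has been staged on the node as /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of how that file could be sanity-checked by hand before the init run further below (kubeadm's `config validate` subcommand is assumed from upstream documentation; minikube itself does not run it here):

    # Sketch only -- manual checks, not part of minikube's flow.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
    # The kubelet config and CRI-O should both declare the systemd cgroup driver:
    grep -n 'cgroupDriver' /var/tmp/minikube/kubeadm.yaml.new
    grep -n 'cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf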
	I0110 09:58:17.110390  490351 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 09:58:17.114022  490351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 09:58:17.123933  490351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:58:17.230956  490351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:58:17.246838  490351 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845 for IP: 192.168.85.2
	I0110 09:58:17.246859  490351 certs.go:195] generating shared ca certs ...
	I0110 09:58:17.246875  490351 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.247059  490351 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 09:58:17.247123  490351 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 09:58:17.247139  490351 certs.go:257] generating profile certs ...
	I0110 09:58:17.247216  490351 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.key
	I0110 09:58:17.247252  490351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.crt with IP's: []
	I0110 09:58:17.425377  490351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.crt ...
	I0110 09:58:17.425412  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.crt: {Name:mk518a35dd190d1c13e274a186c46aac0b65c0e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.425662  490351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.key ...
	I0110 09:58:17.425681  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/client.key: {Name:mkb14258b269a57e40590de8cc644162f2c9e79e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.425801  490351 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key.d87016ff
	I0110 09:58:17.425823  490351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt.d87016ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 09:58:17.564816  490351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt.d87016ff ...
	I0110 09:58:17.564845  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt.d87016ff: {Name:mka8f91ace66a4d1d3ed424ff0e0eec71041a342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.565034  490351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key.d87016ff ...
	I0110 09:58:17.565047  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key.d87016ff: {Name:mk6b93fc5491976a4f4cf76c3f017ba1495719dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.565137  490351 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt.d87016ff -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt
	I0110 09:58:17.565217  490351 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key.d87016ff -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key
	I0110 09:58:17.565282  490351 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.key
	I0110 09:58:17.565301  490351 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.crt with IP's: []
	I0110 09:58:17.900229  490351 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.crt ...
	I0110 09:58:17.900259  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.crt: {Name:mk45acd75c334a72bca2f45577d944d855bffc29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:58:17.900443  490351 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.key ...
	I0110 09:58:17.900460  490351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.key: {Name:mk53245ff56aaacc920494468e348c6e8626f813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
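The client, apiserver, and aggregator profile certs above are generated in Go (crypto.go) and signed by the shared minikube CA. As a rough openssl equivalent for the apiserver cert, assuming RSA keys and the SAN list copied from the log (illustrative only, not what minikube executes):

    # Sketch: issue a CA-signed cert with the IP SANs listed above.
    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key -subj "/CN=minikube" |
      openssl x509 -req -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
        -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2') \
        -out apiserver.crt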
	I0110 09:58:17.900580  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 09:58:17.900604  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 09:58:17.900616  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 09:58:17.900633  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 09:58:17.900645  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 09:58:17.900661  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 09:58:17.900676  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 09:58:17.900692  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 09:58:17.900740  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 09:58:17.900784  490351 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 09:58:17.900793  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 09:58:17.900819  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 09:58:17.900848  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 09:58:17.900877  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 09:58:17.900929  490351 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 09:58:17.900964  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem -> /usr/share/ca-certificates/309898.pem
	I0110 09:58:17.900980  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> /usr/share/ca-certificates/3098982.pem
	I0110 09:58:17.900997  490351 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:58:17.901579  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 09:58:17.919395  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 09:58:17.937982  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 09:58:17.956115  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 09:58:17.973123  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 09:58:17.990988  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 09:58:18.009880  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 09:58:18.029805  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/force-systemd-flag-524845/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 09:58:18.048374  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 09:58:18.067165  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 09:58:18.084766  490351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 09:58:18.103156  490351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 09:58:18.116152  490351 ssh_runner.go:195] Run: openssl version
	I0110 09:58:18.122650  490351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 09:58:18.130503  490351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 09:58:18.137994  490351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 09:58:18.141787  490351 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 09:58:18.141854  490351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 09:58:18.183241  490351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 09:58:18.191230  490351 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
	I0110 09:58:18.199053  490351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 09:58:18.206896  490351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 09:58:18.214984  490351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 09:58:18.219584  490351 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 09:58:18.219728  490351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 09:58:18.265971  490351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 09:58:18.273397  490351 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
	I0110 09:58:18.280861  490351 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:58:18.288075  490351 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 09:58:18.295540  490351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:58:18.299175  490351 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:58:18.299242  490351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:58:18.340759  490351 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 09:58:18.348469  490351 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
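The symlink pairs above follow the standard OpenSSL hashed-certificate layout: each PEM is exposed under /etc/ssl/certs and again under its subject-hash name so the system trust store can find it. A compact sketch of the same pattern for the minikube CA (values taken from this run):

    # Sketch: compute the subject hash and create the hashed alias, as done above.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")          # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"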
	I0110 09:58:18.356224  490351 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 09:58:18.359904  490351 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 09:58:18.359962  490351 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-524845 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-524845 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:58:18.360049  490351 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:58:18.360114  490351 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:58:18.387471  490351 cri.go:96] found id: ""
	I0110 09:58:18.387553  490351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 09:58:18.395437  490351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 09:58:18.404156  490351 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 09:58:18.404231  490351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 09:58:18.416048  490351 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 09:58:18.416070  490351 kubeadm.go:158] found existing configuration files:
	
	I0110 09:58:18.416123  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 09:58:18.425704  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 09:58:18.425773  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 09:58:18.433986  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 09:58:18.442886  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 09:58:18.442959  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 09:58:18.450880  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 09:58:18.459668  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 09:58:18.459733  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 09:58:18.468048  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 09:58:18.476924  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 09:58:18.477005  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 09:58:18.484485  490351 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 09:58:18.607969  490351 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:58:18.608397  490351 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 09:58:18.674428  490351 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 10:00:11.598932  471378 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000851101s
	I0110 10:00:11.598967  471378 kubeadm.go:319] 
	I0110 10:00:11.599021  471378 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 10:00:11.599058  471378 kubeadm.go:319] 	- The kubelet is not running
	I0110 10:00:11.599164  471378 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 10:00:11.599173  471378 kubeadm.go:319] 
	I0110 10:00:11.599271  471378 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 10:00:11.599304  471378 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 10:00:11.599334  471378 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 10:00:11.599338  471378 kubeadm.go:319] 
	I0110 10:00:11.604246  471378 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 10:00:11.604717  471378 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 10:00:11.604834  471378 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 10:00:11.605063  471378 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 10:00:11.605074  471378 kubeadm.go:319] 
	I0110 10:00:11.605139  471378 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 10:00:11.605203  471378 kubeadm.go:403] duration metric: took 8m6.932547357s to StartCluster
	I0110 10:00:11.605241  471378 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 10:00:11.605307  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 10:00:11.631229  471378 cri.go:96] found id: ""
	I0110 10:00:11.631276  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.631286  471378 logs.go:284] No container was found matching "kube-apiserver"
	I0110 10:00:11.631294  471378 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 10:00:11.631368  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 10:00:11.657641  471378 cri.go:96] found id: ""
	I0110 10:00:11.657669  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.657678  471378 logs.go:284] No container was found matching "etcd"
	I0110 10:00:11.657685  471378 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 10:00:11.657747  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 10:00:11.683821  471378 cri.go:96] found id: ""
	I0110 10:00:11.683847  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.683857  471378 logs.go:284] No container was found matching "coredns"
	I0110 10:00:11.683864  471378 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 10:00:11.683922  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 10:00:11.709398  471378 cri.go:96] found id: ""
	I0110 10:00:11.709424  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.709438  471378 logs.go:284] No container was found matching "kube-scheduler"
	I0110 10:00:11.709445  471378 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 10:00:11.709502  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 10:00:11.739817  471378 cri.go:96] found id: ""
	I0110 10:00:11.739839  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.739848  471378 logs.go:284] No container was found matching "kube-proxy"
	I0110 10:00:11.739856  471378 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 10:00:11.739913  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 10:00:11.768653  471378 cri.go:96] found id: ""
	I0110 10:00:11.768678  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.768686  471378 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 10:00:11.768694  471378 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 10:00:11.768752  471378 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 10:00:11.795141  471378 cri.go:96] found id: ""
	I0110 10:00:11.795167  471378 logs.go:282] 0 containers: []
	W0110 10:00:11.795176  471378 logs.go:284] No container was found matching "kindnet"
	I0110 10:00:11.795186  471378 logs.go:123] Gathering logs for CRI-O ...
	I0110 10:00:11.795199  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0110 10:00:11.828287  471378 logs.go:123] Gathering logs for container status ...
	I0110 10:00:11.828320  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 10:00:11.858180  471378 logs.go:123] Gathering logs for kubelet ...
	I0110 10:00:11.858205  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 10:00:11.943321  471378 logs.go:123] Gathering logs for dmesg ...
	I0110 10:00:11.943402  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 10:00:11.964176  471378 logs.go:123] Gathering logs for describe nodes ...
	I0110 10:00:11.964256  471378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 10:00:12.049691  471378 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 10:00:12.041284    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.041791    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.043476    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.043910    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.045588    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 10:00:12.041284    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.041791    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.043476    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.043910    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:12.045588    4940 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0110 10:00:12.049722  471378 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000851101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 10:00:12.049758  471378 out.go:285] * 
	W0110 10:00:12.049816  471378 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000851101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 10:00:12.049832  471378 out.go:285] * 
	W0110 10:00:12.050082  471378 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 10:00:12.055837  471378 out.go:203] 
	W0110 10:00:12.058848  471378 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000851101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 10:00:12.058906  471378 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 10:00:12.058934  471378 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 10:00:12.062018  471378 out.go:203] 
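The exit reason above, K8S_KUBELET_NOT_RUNNING, means kubeadm's poll of the kubelet health endpoint never succeeded within its 4m0s window. A hedged sketch of follow-up checks on the node (essentially the commands kubeadm and minikube suggest, plus a cgroup-driver comparison; run inside the node, e.g. via `minikube ssh`):

    # Diagnostic sketch -- run inside the node.
    systemctl status kubelet --no-pager             # is the unit active at all?
    journalctl -xeu kubelet -n 200 --no-pager       # recent kubelet errors
    curl -sS http://127.0.0.1:10248/healthz; echo   # the endpoint kubeadm was polling
    # kubelet and CRI-O must agree on the cgroup driver ("systemd" for this profile):
    grep cgroupDriver /var/lib/kubelet/config.yaml
    sudo crio config 2>/dev/null | grep cgroup_manager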
	
	
	==> CRI-O <==
	Jan 10 09:52:02 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:02.458060322Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 10 09:52:02 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:02.458104196Z" level=info msg="Starting seccomp notifier watcher"
	Jan 10 09:52:02 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:02.458165136Z" level=info msg="Create NRI interface"
	Jan 10 09:52:02 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:02.458315292Z" level=info msg="built-in NRI default validator is disabled"
	Jan 10 09:52:02 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:02.458331383Z" level=info msg="runtime interface created"
	Jan 10 09:52:02 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:02.458345126Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 10 09:52:02 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:02.458352995Z" level=info msg="runtime interface starting up..."
	Jan 10 09:52:02 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:02.458363367Z" level=info msg="starting plugins..."
	Jan 10 09:52:02 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:02.458375658Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 10 09:52:02 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:02.458448603Z" level=info msg="No systemd watchdog enabled"
	Jan 10 09:52:02 force-systemd-env-646877 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Jan 10 09:52:05 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:05.053787455Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=a0081bfe-c774-4143-887c-5619d6d8e91c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:52:05 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:05.054518378Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=5f474b82-6293-4f89-bad0-02717287f130 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:52:05 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:05.055106382Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=157c3ae4-6867-45b8-8702-92778ebdeda0 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:52:05 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:05.055615369Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=99fafb84-ff09-4c63-97ae-d5aa7876dece name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:52:05 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:05.05606427Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=793160c9-2a7b-4dbc-8f0a-0400fd536610 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:52:05 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:05.056454413Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=5133a0c7-bdb0-494a-b15a-21b8c1c1d863 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:52:05 force-systemd-env-646877 crio[840]: time="2026-01-10T09:52:05.056980721Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=bd26c6a6-7783-4b5d-8434-7c1e5c1b6c60 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:56:09 force-systemd-env-646877 crio[840]: time="2026-01-10T09:56:09.845287131Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=6777c357-8d76-49ce-9ff0-3ff343aea580 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:56:09 force-systemd-env-646877 crio[840]: time="2026-01-10T09:56:09.846153112Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=d5a281e1-7e57-46ac-b957-c2122f15f7aa name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:56:09 force-systemd-env-646877 crio[840]: time="2026-01-10T09:56:09.846744586Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=59ef1afd-73ef-46a7-8649-ee9acd03b4f5 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:56:09 force-systemd-env-646877 crio[840]: time="2026-01-10T09:56:09.847487185Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=6671160a-0420-4102-9800-7a6c36feee2b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:56:09 force-systemd-env-646877 crio[840]: time="2026-01-10T09:56:09.848442234Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=5a0e7ae7-6adc-4475-9af1-a32ccd757540 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:56:09 force-systemd-env-646877 crio[840]: time="2026-01-10T09:56:09.849190083Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=084bd3b5-68e3-49ec-b63f-d872bf59aca9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 09:56:09 force-systemd-env-646877 crio[840]: time="2026-01-10T09:56:09.849848693Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=b33c1697-d5e4-4488-8f30-8b3411415be4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 10:00:13.111273    5042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:13.111858    5042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:13.113392    5042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:13.113909    5042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 10:00:13.115509    5042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +38.319069] overlayfs: idmapped layers are currently not supported
	[Jan10 09:28] overlayfs: idmapped layers are currently not supported
	[  +3.010233] overlayfs: idmapped layers are currently not supported
	[Jan10 09:29] overlayfs: idmapped layers are currently not supported
	[Jan10 09:30] overlayfs: idmapped layers are currently not supported
	[Jan10 09:31] overlayfs: idmapped layers are currently not supported
	[Jan10 09:35] overlayfs: idmapped layers are currently not supported
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:00:13 up  2:42,  0 user,  load average: 0.74, 1.20, 1.94
	Linux force-systemd-env-646877 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 10 10:00:10 force-systemd-env-646877 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 10:00:11 force-systemd-env-646877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Jan 10 10:00:11 force-systemd-env-646877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:00:11 force-systemd-env-646877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:00:11 force-systemd-env-646877 kubelet[4851]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:00:11 force-systemd-env-646877 kubelet[4851]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:00:11 force-systemd-env-646877 kubelet[4851]: E0110 10:00:11.221308    4851 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 10:00:11 force-systemd-env-646877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 10:00:11 force-systemd-env-646877 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 10:00:11 force-systemd-env-646877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 10 10:00:11 force-systemd-env-646877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:00:11 force-systemd-env-646877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:00:12 force-systemd-env-646877 kubelet[4930]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:00:12 force-systemd-env-646877 kubelet[4930]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:00:12 force-systemd-env-646877 kubelet[4930]: E0110 10:00:12.018894    4930 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 10:00:12 force-systemd-env-646877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 10:00:12 force-systemd-env-646877 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 10:00:12 force-systemd-env-646877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Jan 10 10:00:12 force-systemd-env-646877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:00:12 force-systemd-env-646877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 10:00:12 force-systemd-env-646877 kubelet[4960]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:00:12 force-systemd-env-646877 kubelet[4960]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 10:00:12 force-systemd-env-646877 kubelet[4960]: E0110 10:00:12.747500    4960 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 10:00:12 force-systemd-env-646877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 10:00:12 force-systemd-env-646877 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-646877 -n force-systemd-env-646877
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-646877 -n force-systemd-env-646877: exit status 6 (358.267409ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 10:00:13.586266  494242 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-646877" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-646877" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-646877" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-646877
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-646877: (1.965223578s)
--- FAIL: TestForceSystemdEnv (506.75s)
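The root cause visible in the kubelet journal above is that kubelet v1.35 refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so kubeadm's wait-control-plane phase times out and the start fails. A hedged manual follow-up of minikube's own suggestion, assuming the node were still up under the same profile name, might look like this:

	# inspect the crash-looping kubelet, as suggested by kubeadm and minikube above
	out/minikube-linux-arm64 ssh -p force-systemd-env-646877 -- sudo systemctl status kubelet
	out/minikube-linux-arm64 ssh -p force-systemd-env-646877 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# retry the start with the cgroup driver override named in the suggestion above
	out/minikube-linux-arm64 start -p force-systemd-env-646877 --driver=docker --container-runtime=crio \
		--force-systemd=true --extra-config=kubelet.cgroup-driver=systemd

Separately, the status output above suggests `minikube update-context` for the stale kubectl context warning; with this profile that would be `out/minikube-linux-arm64 update-context -p force-systemd-env-646877`.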

                                                
                                    
TestJSONOutput/pause/Command (2.33s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-320147 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-320147 --output=json --user=testUser: exit status 80 (2.332439577s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3fbd6f94-0b14-45c2-a9a9-7f7630817486","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-320147 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"ffc5394a-188b-4eec-bab7-713e37e0d834","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-10T09:32:04Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"44892cff-89aa-403a-b781-bbeeb7e4f440","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-320147 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.33s)
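The underlying error is that `sudo runc list -f json` fails inside the node because the default runc state directory /run/runc does not exist, so minikube cannot enumerate the containers it is supposed to pause. A hedged manual check, assuming the json-output-320147 profile is still up, could be:

	# does the default runc state directory exist inside the minikube node?
	out/minikube-linux-arm64 ssh -p json-output-320147 -- ls -ld /run/runc
	# fall back to the CRI level to confirm CRI-O itself still sees the containers
	out/minikube-linux-arm64 ssh -p json-output-320147 -- sudo crictl ps -a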

                                                
                                    
TestJSONOutput/unpause/Command (2.09s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-320147 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-320147 --output=json --user=testUser: exit status 80 (2.092531241s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f3523d99-67f3-4b38-ad52-5d8428005d6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-320147 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"fb2aefd4-7521-45a3-84de-486f5dbd4c71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-10T09:32:06Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"09667488-a588-4682-8c01-817d6249888f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-320147 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.09s)
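The unpause failure is the same missing /run/runc problem as the pause case above. Since `--output=json` emits one CloudEvents-style JSON object per line with a `type` field, the error payloads can be pulled out of a run like this with a small filter; a hedged sketch, assuming jq is installed on the host:

	# print only the error events (type io.k8s.sigs.minikube.error) from the JSON stream
	out/minikube-linux-arm64 unpause -p json-output-320147 --output=json --user=testUser \
		| jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

For the run above this would print just the GUEST_UNPAUSE message, rather than the full event stream.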

                                                
                                    
TestPause/serial/Pause (6.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-667994 --alsologtostderr -v=5
E0110 09:44:41.347062  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-667994 --alsologtostderr -v=5: exit status 80 (1.939581556s)

                                                
                                                
-- stdout --
	* Pausing node pause-667994 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:44:39.767886  439634 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:44:39.768103  439634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:44:39.768117  439634 out.go:374] Setting ErrFile to fd 2...
	I0110 09:44:39.768123  439634 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:44:39.768486  439634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:44:39.768954  439634 out.go:368] Setting JSON to false
	I0110 09:44:39.768993  439634 mustload.go:66] Loading cluster: pause-667994
	I0110 09:44:39.769633  439634 config.go:182] Loaded profile config "pause-667994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:44:39.770211  439634 cli_runner.go:164] Run: docker container inspect pause-667994 --format={{.State.Status}}
	I0110 09:44:39.786778  439634 host.go:66] Checking if "pause-667994" exists ...
	I0110 09:44:39.787106  439634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:44:39.843125  439634 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:65 SystemTime:2026-01-10 09:44:39.83366356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:44:39.843789  439634 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-667994 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 09:44:39.846940  439634 out.go:179] * Pausing node pause-667994 ... 
	I0110 09:44:39.850543  439634 host.go:66] Checking if "pause-667994" exists ...
	I0110 09:44:39.850884  439634 ssh_runner.go:195] Run: systemctl --version
	I0110 09:44:39.850941  439634 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-667994
	I0110 09:44:39.867534  439634 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33334 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/pause-667994/id_rsa Username:docker}
	I0110 09:44:39.971180  439634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:44:39.990259  439634 pause.go:52] kubelet running: true
	I0110 09:44:39.990332  439634 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 09:44:40.319983  439634 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 09:44:40.320073  439634 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 09:44:40.410595  439634 cri.go:96] found id: "9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c"
	I0110 09:44:40.410617  439634 cri.go:96] found id: "cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5"
	I0110 09:44:40.410628  439634 cri.go:96] found id: "21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4"
	I0110 09:44:40.410632  439634 cri.go:96] found id: "68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f"
	I0110 09:44:40.410636  439634 cri.go:96] found id: "52a15e29b6810213581004202f307660af088a92dad0ff8891c6255bfd4a109c"
	I0110 09:44:40.410639  439634 cri.go:96] found id: "53f6564b9ab169e7bc731d93eaab979c3c9833109ed4642b5198d8e526714f21"
	I0110 09:44:40.410642  439634 cri.go:96] found id: "ca7b22c04427907811f0cdfff05f6eb66fb79acba12c23d12611c3a16d4a5ea1"
	I0110 09:44:40.410645  439634 cri.go:96] found id: "0784e58579d80d5fdf9ddd218fcc3557f470cd5dafeef80fe3b62c323d467f92"
	I0110 09:44:40.410653  439634 cri.go:96] found id: "a9ee5c4a9a8997f500d31311aaf7abec04fd144f6a38bf93ae0a1a7e06b8a4ec"
	I0110 09:44:40.410668  439634 cri.go:96] found id: "08ecb38cf4e5b37649293c433f10aa7f9823c2691ecdb51233eb8c3474936604"
	I0110 09:44:40.410675  439634 cri.go:96] found id: "3137a4adba2b54df4dbcba37d5c02ee6e8385e299cea66c5a18a8c78c7530e30"
	I0110 09:44:40.410678  439634 cri.go:96] found id: "b64a03dfccee29f0abd41c54d6eab5aad5bf378293778a8157db8c2e1453fdcb"
	I0110 09:44:40.410681  439634 cri.go:96] found id: "3e198530592feda9a423d035b1ef29be2edbe94052f9194b7ca5370b54f3e119"
	I0110 09:44:40.410684  439634 cri.go:96] found id: "6b2602db93009f92d4e46ce3746289ad07e294fdd5632f0cc9df5ca69568a037"
	I0110 09:44:40.410687  439634 cri.go:96] found id: ""
	I0110 09:44:40.410737  439634 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:44:40.422158  439634 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:44:40Z" level=error msg="open /run/runc: no such file or directory"
	I0110 09:44:40.661687  439634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:44:40.674908  439634 pause.go:52] kubelet running: false
	I0110 09:44:40.675019  439634 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 09:44:40.816239  439634 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 09:44:40.816366  439634 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 09:44:40.886283  439634 cri.go:96] found id: "9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c"
	I0110 09:44:40.886310  439634 cri.go:96] found id: "cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5"
	I0110 09:44:40.886316  439634 cri.go:96] found id: "21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4"
	I0110 09:44:40.886320  439634 cri.go:96] found id: "68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f"
	I0110 09:44:40.886324  439634 cri.go:96] found id: "52a15e29b6810213581004202f307660af088a92dad0ff8891c6255bfd4a109c"
	I0110 09:44:40.886328  439634 cri.go:96] found id: "53f6564b9ab169e7bc731d93eaab979c3c9833109ed4642b5198d8e526714f21"
	I0110 09:44:40.886332  439634 cri.go:96] found id: "ca7b22c04427907811f0cdfff05f6eb66fb79acba12c23d12611c3a16d4a5ea1"
	I0110 09:44:40.886335  439634 cri.go:96] found id: "0784e58579d80d5fdf9ddd218fcc3557f470cd5dafeef80fe3b62c323d467f92"
	I0110 09:44:40.886338  439634 cri.go:96] found id: "a9ee5c4a9a8997f500d31311aaf7abec04fd144f6a38bf93ae0a1a7e06b8a4ec"
	I0110 09:44:40.886344  439634 cri.go:96] found id: "08ecb38cf4e5b37649293c433f10aa7f9823c2691ecdb51233eb8c3474936604"
	I0110 09:44:40.886348  439634 cri.go:96] found id: "3137a4adba2b54df4dbcba37d5c02ee6e8385e299cea66c5a18a8c78c7530e30"
	I0110 09:44:40.886351  439634 cri.go:96] found id: "b64a03dfccee29f0abd41c54d6eab5aad5bf378293778a8157db8c2e1453fdcb"
	I0110 09:44:40.886354  439634 cri.go:96] found id: "3e198530592feda9a423d035b1ef29be2edbe94052f9194b7ca5370b54f3e119"
	I0110 09:44:40.886357  439634 cri.go:96] found id: "6b2602db93009f92d4e46ce3746289ad07e294fdd5632f0cc9df5ca69568a037"
	I0110 09:44:40.886361  439634 cri.go:96] found id: ""
	I0110 09:44:40.886411  439634 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:44:41.401125  439634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:44:41.414375  439634 pause.go:52] kubelet running: false
	I0110 09:44:41.414482  439634 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 09:44:41.557217  439634 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 09:44:41.557342  439634 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 09:44:41.631087  439634 cri.go:96] found id: "9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c"
	I0110 09:44:41.631113  439634 cri.go:96] found id: "cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5"
	I0110 09:44:41.631118  439634 cri.go:96] found id: "21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4"
	I0110 09:44:41.631122  439634 cri.go:96] found id: "68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f"
	I0110 09:44:41.631125  439634 cri.go:96] found id: "52a15e29b6810213581004202f307660af088a92dad0ff8891c6255bfd4a109c"
	I0110 09:44:41.631128  439634 cri.go:96] found id: "53f6564b9ab169e7bc731d93eaab979c3c9833109ed4642b5198d8e526714f21"
	I0110 09:44:41.631131  439634 cri.go:96] found id: "ca7b22c04427907811f0cdfff05f6eb66fb79acba12c23d12611c3a16d4a5ea1"
	I0110 09:44:41.631134  439634 cri.go:96] found id: "0784e58579d80d5fdf9ddd218fcc3557f470cd5dafeef80fe3b62c323d467f92"
	I0110 09:44:41.631138  439634 cri.go:96] found id: "a9ee5c4a9a8997f500d31311aaf7abec04fd144f6a38bf93ae0a1a7e06b8a4ec"
	I0110 09:44:41.631148  439634 cri.go:96] found id: "08ecb38cf4e5b37649293c433f10aa7f9823c2691ecdb51233eb8c3474936604"
	I0110 09:44:41.631152  439634 cri.go:96] found id: "3137a4adba2b54df4dbcba37d5c02ee6e8385e299cea66c5a18a8c78c7530e30"
	I0110 09:44:41.631155  439634 cri.go:96] found id: "b64a03dfccee29f0abd41c54d6eab5aad5bf378293778a8157db8c2e1453fdcb"
	I0110 09:44:41.631159  439634 cri.go:96] found id: "3e198530592feda9a423d035b1ef29be2edbe94052f9194b7ca5370b54f3e119"
	I0110 09:44:41.631163  439634 cri.go:96] found id: "6b2602db93009f92d4e46ce3746289ad07e294fdd5632f0cc9df5ca69568a037"
	I0110 09:44:41.631167  439634 cri.go:96] found id: ""
	I0110 09:44:41.631218  439634 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 09:44:41.645960  439634 out.go:203] 
	W0110 09:44:41.648881  439634 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:44:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:44:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 09:44:41.648948  439634 out.go:285] * 
	* 
	W0110 09:44:41.653199  439634 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 09:44:41.657848  439634 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-667994 --alsologtostderr -v=5" : exit status 80
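The trace above shows the order of operations for pause: the kubelet is disabled first (`sudo systemctl disable --now kubelet`), the kube-system containers are listed via crictl, and only then does `sudo runc list -f json` fail on the missing /run/runc directory, with a couple of short retries before the GUEST_PAUSE exit. A failed pause therefore leaves the node with the kubelet already stopped but no containers frozen. A hedged manual check of that intermediate state, assuming pause-667994 is still running, could be:

	# kubelet was disabled by the failed pause attempt
	out/minikube-linux-arm64 ssh -p pause-667994 -- sudo systemctl is-enabled kubelet
	out/minikube-linux-arm64 ssh -p pause-667994 -- sudo systemctl is-active kubelet
	# the same CRI listing the pause code performed before calling runc
	out/minikube-linux-arm64 ssh -p pause-667994 -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system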
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-667994
helpers_test.go:244: (dbg) docker inspect pause-667994:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4",
	        "Created": "2026-01-10T09:43:38.380700561Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 434235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T09:43:38.533344573Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4/hosts",
	        "LogPath": "/var/lib/docker/containers/49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4/49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4-json.log",
	        "Name": "/pause-667994",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-667994:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-667994",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4",
	                "LowerDir": "/var/lib/docker/overlay2/d282b19c42149e59b8ba4bdaa13ff6428ef0a4321a240cada956ea9cba54616f-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d282b19c42149e59b8ba4bdaa13ff6428ef0a4321a240cada956ea9cba54616f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d282b19c42149e59b8ba4bdaa13ff6428ef0a4321a240cada956ea9cba54616f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d282b19c42149e59b8ba4bdaa13ff6428ef0a4321a240cada956ea9cba54616f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-667994",
	                "Source": "/var/lib/docker/volumes/pause-667994/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-667994",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-667994",
	                "name.minikube.sigs.k8s.io": "pause-667994",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67f3ea46b1f1e1135407b8c43282379648175e923d2565637124ec9489ff8efb",
	            "SandboxKey": "/var/run/docker/netns/67f3ea46b1f1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33334"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33335"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33338"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33336"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33337"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-667994": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:96:fe:98:02:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "91bc10798e2b8f8d700f1fb87405c7bb915a3111bec2f90fb0f7d1eac54c1f8b",
	                    "EndpointID": "6d7d1a755edbdbc2b13e8b6c43f01e5ba27a2c86d6bd6e47e9b10b8e6b702ed8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-667994",
	                        "49ea7664f096"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
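The inspect output above is also where minikube picks up the host-side SSH port for the node (22/tcp mapped to 127.0.0.1:33334 here); the same Go-template query that appears in the stderr trace earlier can be run by hand to confirm it:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-667994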
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-667994 -n pause-667994
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-667994 -n pause-667994: exit status 2 (338.191251ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-667994 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-667994 logs -n 25: (1.437820739s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p multinode-885817 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                 │ multinode-885817            │ jenkins │ v1.37.0 │ 10 Jan 26 09:40 UTC │ 10 Jan 26 09:41 UTC │
	│ node    │ list -p multinode-885817                                                                                         │ multinode-885817            │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │                     │
	│ start   │ -p multinode-885817-m02 --driver=docker  --container-runtime=crio                                                │ multinode-885817-m02        │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │                     │
	│ start   │ -p multinode-885817-m03 --driver=docker  --container-runtime=crio                                                │ multinode-885817-m03        │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │ 10 Jan 26 09:41 UTC │
	│ node    │ add -p multinode-885817                                                                                          │ multinode-885817            │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │                     │
	│ delete  │ -p multinode-885817-m03                                                                                          │ multinode-885817-m03        │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │ 10 Jan 26 09:41 UTC │
	│ delete  │ -p multinode-885817                                                                                              │ multinode-885817            │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │ 10 Jan 26 09:41 UTC │
	│ start   │ -p scheduled-stop-472417 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │ 10 Jan 26 09:42 UTC │
	│ stop    │ -p scheduled-stop-472417 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --cancel-scheduled                                                                      │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │ 10 Jan 26 09:42 UTC │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │ 10 Jan 26 09:42 UTC │
	│ delete  │ -p scheduled-stop-472417                                                                                         │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:43 UTC │ 10 Jan 26 09:43 UTC │
	│ start   │ -p insufficient-storage-104326 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-104326 │ jenkins │ v1.37.0 │ 10 Jan 26 09:43 UTC │                     │
	│ delete  │ -p insufficient-storage-104326                                                                                   │ insufficient-storage-104326 │ jenkins │ v1.37.0 │ 10 Jan 26 09:43 UTC │ 10 Jan 26 09:43 UTC │
	│ start   │ -p pause-667994 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-667994                │ jenkins │ v1.37.0 │ 10 Jan 26 09:43 UTC │ 10 Jan 26 09:44 UTC │
	│ start   │ -p missing-upgrade-191186 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-191186      │ jenkins │ v1.35.0 │ 10 Jan 26 09:43 UTC │ 10 Jan 26 09:44 UTC │
	│ start   │ -p pause-667994 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-667994                │ jenkins │ v1.37.0 │ 10 Jan 26 09:44 UTC │ 10 Jan 26 09:44 UTC │
	│ pause   │ -p pause-667994 --alsologtostderr -v=5                                                                           │ pause-667994                │ jenkins │ v1.37.0 │ 10 Jan 26 09:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 09:44:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
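
For readers skimming the log below, the header line above gives the klog format each entry follows. A minimal sketch (hypothetical helper, not part of minikube or this test suite) of splitting one such line into its fields; the sample line is copied from the log that follows:

// parse_klog.go — hypothetical: split a "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" entry.
package main

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	line := "I0110 09:44:22.689399  438388 out.go:360] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
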
	I0110 09:44:22.689399  438388 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:44:22.689632  438388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:44:22.689658  438388 out.go:374] Setting ErrFile to fd 2...
	I0110 09:44:22.689678  438388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:44:22.689959  438388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:44:22.690501  438388 out.go:368] Setting JSON to false
	I0110 09:44:22.691609  438388 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8812,"bootTime":1768029451,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:44:22.691700  438388 start.go:143] virtualization:  
	I0110 09:44:22.696858  438388 out.go:179] * [pause-667994] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:44:22.700117  438388 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:44:22.700186  438388 notify.go:221] Checking for updates...
	I0110 09:44:22.706179  438388 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:44:22.709247  438388 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:44:22.712650  438388 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:44:22.715647  438388 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:44:22.718598  438388 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:44:22.722066  438388 config.go:182] Loaded profile config "pause-667994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:44:22.722805  438388 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:44:22.767204  438388 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:44:22.767437  438388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:44:22.881226  438388 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2026-01-10 09:44:22.861373815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:44:22.881328  438388 docker.go:319] overlay module found
	I0110 09:44:22.884526  438388 out.go:179] * Using the docker driver based on existing profile
	I0110 09:44:22.887279  438388 start.go:309] selected driver: docker
	I0110 09:44:22.887298  438388 start.go:928] validating driver "docker" against &{Name:pause-667994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-667994 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:44:22.887422  438388 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:44:22.887529  438388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:44:22.993406  438388 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2026-01-10 09:44:22.983274863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:44:22.993796  438388 cni.go:84] Creating CNI manager for ""
	I0110 09:44:22.993844  438388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:44:22.993891  438388 start.go:353] cluster config:
	{Name:pause-667994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:44:22.997157  438388 out.go:179] * Starting "pause-667994" primary control-plane node in "pause-667994" cluster
	I0110 09:44:23.000038  438388 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 09:44:23.003239  438388 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:44:23.006193  438388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:44:23.006243  438388 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 09:44:23.006270  438388 cache.go:65] Caching tarball of preloaded images
	I0110 09:44:23.006378  438388 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 09:44:23.006389  438388 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 09:44:23.006546  438388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/config.json ...
	I0110 09:44:23.006812  438388 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 09:44:23.031099  438388 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 09:44:23.031120  438388 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 09:44:23.031135  438388 cache.go:243] Successfully downloaded all kic artifacts
	I0110 09:44:23.031170  438388 start.go:360] acquireMachinesLock for pause-667994: {Name:mk64d0aa0eb0a4232899ad56066cebf6daf8ba84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 09:44:23.031222  438388 start.go:364] duration metric: took 34.791µs to acquireMachinesLock for "pause-667994"
	I0110 09:44:23.031243  438388 start.go:96] Skipping create...Using existing machine configuration
	I0110 09:44:23.031247  438388 fix.go:54] fixHost starting: 
	I0110 09:44:23.031506  438388 cli_runner.go:164] Run: docker container inspect pause-667994 --format={{.State.Status}}
	I0110 09:44:23.051350  438388 fix.go:112] recreateIfNeeded on pause-667994: state=Running err=<nil>
	W0110 09:44:23.051379  438388 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 09:44:21.803128  433934 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001483853s
	I0110 09:44:21.803208  433934 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0110 09:44:23.053759  438388 out.go:252] * Updating the running docker "pause-667994" container ...
	I0110 09:44:23.053795  438388 machine.go:94] provisionDockerMachine start ...
	I0110 09:44:23.053889  438388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-667994
	I0110 09:44:23.086792  438388 main.go:144] libmachine: Using SSH client type: native
	I0110 09:44:23.087116  438388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33334 <nil> <nil>}
	I0110 09:44:23.087124  438388 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 09:44:23.293342  438388 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-667994
	
	I0110 09:44:23.293416  438388 ubuntu.go:182] provisioning hostname "pause-667994"
	I0110 09:44:23.293546  438388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-667994
	I0110 09:44:23.336889  438388 main.go:144] libmachine: Using SSH client type: native
	I0110 09:44:23.337203  438388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33334 <nil> <nil>}
	I0110 09:44:23.337213  438388 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-667994 && echo "pause-667994" | sudo tee /etc/hostname
	I0110 09:44:23.537534  438388 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-667994
	
	I0110 09:44:23.537618  438388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-667994
	I0110 09:44:23.566222  438388 main.go:144] libmachine: Using SSH client type: native
	I0110 09:44:23.566548  438388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33334 <nil> <nil>}
	I0110 09:44:23.566563  438388 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-667994' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-667994/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-667994' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 09:44:23.745893  438388 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 09:44:23.745971  438388 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 09:44:23.746015  438388 ubuntu.go:190] setting up certificates
	I0110 09:44:23.746059  438388 provision.go:84] configureAuth start
	I0110 09:44:23.746167  438388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-667994
	I0110 09:44:23.774006  438388 provision.go:143] copyHostCerts
	I0110 09:44:23.774070  438388 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 09:44:23.774087  438388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 09:44:23.774173  438388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 09:44:23.774283  438388 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 09:44:23.774288  438388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 09:44:23.774314  438388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 09:44:23.774365  438388 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 09:44:23.774370  438388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 09:44:23.774392  438388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 09:44:23.774435  438388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.pause-667994 san=[127.0.0.1 192.168.76.2 localhost minikube pause-667994]
	I0110 09:44:24.438234  438388 provision.go:177] copyRemoteCerts
	I0110 09:44:24.438349  438388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 09:44:24.438438  438388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-667994
	I0110 09:44:24.457003  438388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33334 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/pause-667994/id_rsa Username:docker}
	I0110 09:44:24.573117  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 09:44:24.597956  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0110 09:44:24.622205  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 09:44:24.650887  438388 provision.go:87] duration metric: took 904.792105ms to configureAuth
	I0110 09:44:24.650964  438388 ubuntu.go:206] setting minikube options for container-runtime
	I0110 09:44:24.651226  438388 config.go:182] Loaded profile config "pause-667994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:44:24.651379  438388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-667994
	I0110 09:44:24.678749  438388 main.go:144] libmachine: Using SSH client type: native
	I0110 09:44:24.679069  438388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33334 <nil> <nil>}
	I0110 09:44:24.679083  438388 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 09:44:28.804631  433934 kubeadm.go:310] [api-check] The API server is healthy after 7.001500543s
	I0110 09:44:28.823953  433934 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 09:44:28.839588  433934 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 09:44:28.865598  433934 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 09:44:28.865830  433934 kubeadm.go:310] [mark-control-plane] Marking the node missing-upgrade-191186 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 09:44:28.877830  433934 kubeadm.go:310] [bootstrap-token] Using token: 97wfxi.1fth5981xt81lgyc
	I0110 09:44:28.881041  433934 out.go:235]   - Configuring RBAC rules ...
	I0110 09:44:28.881163  433934 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 09:44:28.889862  433934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 09:44:28.897546  433934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 09:44:28.903171  433934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 09:44:28.910816  433934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 09:44:28.917355  433934 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 09:44:29.213934  433934 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 09:44:29.640235  433934 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0110 09:44:30.213727  433934 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0110 09:44:30.214982  433934 kubeadm.go:310] 
	I0110 09:44:30.215045  433934 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0110 09:44:30.215050  433934 kubeadm.go:310] 
	I0110 09:44:30.215138  433934 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0110 09:44:30.215142  433934 kubeadm.go:310] 
	I0110 09:44:30.215167  433934 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0110 09:44:30.215225  433934 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 09:44:30.215275  433934 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 09:44:30.215287  433934 kubeadm.go:310] 
	I0110 09:44:30.215340  433934 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0110 09:44:30.215344  433934 kubeadm.go:310] 
	I0110 09:44:30.215391  433934 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 09:44:30.215395  433934 kubeadm.go:310] 
	I0110 09:44:30.215446  433934 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0110 09:44:30.215519  433934 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 09:44:30.215587  433934 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 09:44:30.215590  433934 kubeadm.go:310] 
	I0110 09:44:30.215673  433934 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 09:44:30.215748  433934 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0110 09:44:30.215752  433934 kubeadm.go:310] 
	I0110 09:44:30.215835  433934 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 97wfxi.1fth5981xt81lgyc \
	I0110 09:44:30.215939  433934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e \
	I0110 09:44:30.215959  433934 kubeadm.go:310] 	--control-plane 
	I0110 09:44:30.215962  433934 kubeadm.go:310] 
	I0110 09:44:30.216046  433934 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0110 09:44:30.216049  433934 kubeadm.go:310] 
	I0110 09:44:30.216130  433934 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 97wfxi.1fth5981xt81lgyc \
	I0110 09:44:30.216231  433934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e 
	I0110 09:44:30.219314  433934 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0110 09:44:30.219531  433934 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 09:44:30.219634  433934 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
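
As a side note on the kubeadm output above: the --discovery-token-ca-cert-hash printed in the join commands is a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A minimal sketch (not part of minikube or the test; assumes the node's /etc/kubernetes/pki/ca.crt has been copied locally as ca.crt) of recomputing that value:

// verify_ca_hash.go — hypothetical: recompute the discovery-token CA cert hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// ca.crt is assumed to be a local copy of the cluster CA certificate.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
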
	I0110 09:44:30.219650  433934 cni.go:84] Creating CNI manager for ""
	I0110 09:44:30.219657  433934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:44:30.224762  433934 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0110 09:44:30.228128  433934 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 09:44:30.232784  433934 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.0/kubectl ...
	I0110 09:44:30.232795  433934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0110 09:44:30.256710  433934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 09:44:31.434713  433934 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.177977042s)
	I0110 09:44:31.434742  433934 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 09:44:31.434861  433934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 09:44:31.434928  433934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes missing-upgrade-191186 minikube.k8s.io/updated_at=2026_01_10T09_44_31_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=dd5d320e41b5451cdf3c01891bc4e13d189586ed-dirty minikube.k8s.io/name=missing-upgrade-191186 minikube.k8s.io/primary=true
	I0110 09:44:31.729983  433934 ops.go:34] apiserver oom_adj: -16
	I0110 09:44:31.730003  433934 kubeadm.go:1113] duration metric: took 295.188147ms to wait for elevateKubeSystemPrivileges
	I0110 09:44:31.730017  433934 kubeadm.go:394] duration metric: took 18.705117094s to StartCluster
	I0110 09:44:31.730042  433934 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:44:31.730098  433934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:44:31.731145  433934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:44:31.731364  433934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 09:44:31.731458  433934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 09:44:31.731730  433934 config.go:182] Loaded profile config "missing-upgrade-191186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 09:44:31.731765  433934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 09:44:31.731836  433934 addons.go:69] Setting storage-provisioner=true in profile "missing-upgrade-191186"
	I0110 09:44:31.731860  433934 addons.go:238] Setting addon storage-provisioner=true in "missing-upgrade-191186"
	I0110 09:44:31.731883  433934 host.go:66] Checking if "missing-upgrade-191186" exists ...
	I0110 09:44:31.732379  433934 cli_runner.go:164] Run: docker container inspect missing-upgrade-191186 --format={{.State.Status}}
	I0110 09:44:31.732826  433934 addons.go:69] Setting default-storageclass=true in profile "missing-upgrade-191186"
	I0110 09:44:31.732842  433934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "missing-upgrade-191186"
	I0110 09:44:31.733361  433934 cli_runner.go:164] Run: docker container inspect missing-upgrade-191186 --format={{.State.Status}}
	I0110 09:44:31.736677  433934 out.go:177] * Verifying Kubernetes components...
	I0110 09:44:31.738299  433934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:44:31.772590  433934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 09:44:30.180578  438388 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 09:44:30.180601  438388 machine.go:97] duration metric: took 7.126790266s to provisionDockerMachine
	I0110 09:44:30.180614  438388 start.go:293] postStartSetup for "pause-667994" (driver="docker")
	I0110 09:44:30.180625  438388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 09:44:30.180689  438388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 09:44:30.180740  438388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-667994
	I0110 09:44:30.201542  438388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33334 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/pause-667994/id_rsa Username:docker}
	I0110 09:44:30.319120  438388 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 09:44:30.323494  438388 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 09:44:30.323528  438388 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 09:44:30.323543  438388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 09:44:30.323605  438388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 09:44:30.323704  438388 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 09:44:30.323823  438388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 09:44:30.332381  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 09:44:30.353827  438388 start.go:296] duration metric: took 173.19597ms for postStartSetup
	I0110 09:44:30.353920  438388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:44:30.353976  438388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-667994
	I0110 09:44:30.372584  438388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33334 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/pause-667994/id_rsa Username:docker}
	I0110 09:44:30.486871  438388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 09:44:30.492938  438388 fix.go:56] duration metric: took 7.46167172s for fixHost
	I0110 09:44:30.492972  438388 start.go:83] releasing machines lock for "pause-667994", held for 7.461741892s
	I0110 09:44:30.493068  438388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-667994
	I0110 09:44:30.512453  438388 ssh_runner.go:195] Run: cat /version.json
	I0110 09:44:30.512546  438388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-667994
	I0110 09:44:30.512943  438388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 09:44:30.513011  438388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-667994
	I0110 09:44:30.531625  438388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33334 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/pause-667994/id_rsa Username:docker}
	I0110 09:44:30.542230  438388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33334 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/pause-667994/id_rsa Username:docker}
	I0110 09:44:30.657930  438388 ssh_runner.go:195] Run: systemctl --version
	I0110 09:44:30.787211  438388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 09:44:30.938114  438388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 09:44:30.943343  438388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 09:44:30.943417  438388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 09:44:30.958574  438388 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 09:44:30.958656  438388 start.go:496] detecting cgroup driver to use...
	I0110 09:44:30.958721  438388 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 09:44:30.958814  438388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 09:44:30.980013  438388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 09:44:31.000199  438388 docker.go:218] disabling cri-docker service (if available) ...
	I0110 09:44:31.000287  438388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 09:44:31.023428  438388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 09:44:31.038332  438388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 09:44:31.222446  438388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 09:44:31.416703  438388 docker.go:234] disabling docker service ...
	I0110 09:44:31.416787  438388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 09:44:31.434878  438388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 09:44:31.457960  438388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 09:44:31.659660  438388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 09:44:31.927917  438388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 09:44:31.954513  438388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 09:44:31.982163  438388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 09:44:31.982234  438388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:44:31.998963  438388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 09:44:31.999027  438388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:44:32.017020  438388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:44:32.033037  438388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:44:32.050220  438388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 09:44:32.063905  438388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:44:32.075893  438388 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:44:32.091497  438388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 09:44:32.106978  438388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 09:44:32.117987  438388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 09:44:32.133365  438388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:44:32.341135  438388 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 09:44:32.627998  438388 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 09:44:32.628081  438388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 09:44:32.635472  438388 start.go:574] Will wait 60s for crictl version
	I0110 09:44:32.635602  438388 ssh_runner.go:195] Run: which crictl
	I0110 09:44:32.643318  438388 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 09:44:32.697827  438388 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
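
The runtime probe above is the standard `crictl version` call against the CRI-O socket. A trivial sketch (hypothetical, not part of the test harness; assumes crictl on PATH and passwordless sudo, as on the CI node) of reproducing the same probe from Go:

// crictl_version.go — hypothetical: re-run the runtime version probe shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Println("crictl version failed:", err)
	}
	fmt.Print(string(out))
}
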
	I0110 09:44:32.698026  438388 ssh_runner.go:195] Run: crio --version
	I0110 09:44:32.734854  438388 ssh_runner.go:195] Run: crio --version
	I0110 09:44:32.783469  438388 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 09:44:31.774551  433934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 09:44:31.774562  433934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 09:44:31.774625  433934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-191186
	I0110 09:44:31.791172  433934 addons.go:238] Setting addon default-storageclass=true in "missing-upgrade-191186"
	I0110 09:44:31.791201  433934 host.go:66] Checking if "missing-upgrade-191186" exists ...
	I0110 09:44:31.791646  433934 cli_runner.go:164] Run: docker container inspect missing-upgrade-191186 --format={{.State.Status}}
	I0110 09:44:31.813535  433934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33339 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/missing-upgrade-191186/id_rsa Username:docker}
	I0110 09:44:31.836539  433934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 09:44:31.836552  433934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 09:44:31.836619  433934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-191186
	I0110 09:44:31.867075  433934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33339 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/missing-upgrade-191186/id_rsa Username:docker}
	I0110 09:44:32.090912  433934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 09:44:32.100483  433934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 09:44:32.124179  433934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 09:44:32.218599  433934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:44:32.779683  433934 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0110 09:44:33.046934  433934 api_server.go:52] waiting for apiserver process to appear ...
	I0110 09:44:33.046996  433934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 09:44:33.054123  433934 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0110 09:44:33.057373  433934 addons.go:514] duration metric: took 1.325589s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0110 09:44:33.079868  433934 api_server.go:72] duration metric: took 1.348476884s to wait for apiserver process to appear ...
	I0110 09:44:33.079883  433934 api_server.go:88] waiting for apiserver healthz status ...
	I0110 09:44:33.079905  433934 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0110 09:44:33.105030  433934 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0110 09:44:33.115680  433934 api_server.go:141] control plane version: v1.32.0
	I0110 09:44:33.115698  433934 api_server.go:131] duration metric: took 35.809536ms to wait for apiserver health ...
	I0110 09:44:33.115706  433934 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 09:44:33.134904  433934 system_pods.go:59] 5 kube-system pods found
	I0110 09:44:33.134924  433934 system_pods.go:61] "etcd-missing-upgrade-191186" [c9c057cb-c896-4d7e-912c-be75874ae932] Running
	I0110 09:44:33.134932  433934 system_pods.go:61] "kube-apiserver-missing-upgrade-191186" [0bd3bd6d-3b23-48b3-bae8-5688f1497f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 09:44:33.134938  433934 system_pods.go:61] "kube-controller-manager-missing-upgrade-191186" [aa0e29ff-edf0-4c26-81cb-e972ec91373d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 09:44:33.134946  433934 system_pods.go:61] "kube-scheduler-missing-upgrade-191186" [d8f4240c-5470-425e-9d83-cee9546d4718] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 09:44:33.134954  433934 system_pods.go:61] "storage-provisioner" [bb92f2d6-0a19-477f-8a00-277659910512] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 09:44:33.134980  433934 system_pods.go:74] duration metric: took 19.248288ms to wait for pod list to return data ...
	I0110 09:44:33.134990  433934 kubeadm.go:582] duration metric: took 1.403606396s to wait for: map[apiserver:true system_pods:true]
	I0110 09:44:33.135004  433934 node_conditions.go:102] verifying NodePressure condition ...
	I0110 09:44:33.140171  433934 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 09:44:33.140199  433934 node_conditions.go:123] node cpu capacity is 2
	I0110 09:44:33.140209  433934 node_conditions.go:105] duration metric: took 5.200506ms to run NodePressure ...
	I0110 09:44:33.140221  433934 start.go:241] waiting for startup goroutines ...
	I0110 09:44:33.284279  433934 kapi.go:214] "coredns" deployment in "kube-system" namespace and "missing-upgrade-191186" context rescaled to 1 replicas
	I0110 09:44:33.284303  433934 start.go:246] waiting for cluster config update ...
	I0110 09:44:33.284314  433934 start.go:255] writing updated cluster config ...
	I0110 09:44:33.284637  433934 ssh_runner.go:195] Run: rm -f paused
	I0110 09:44:33.390722  433934 start.go:600] kubectl: 1.33.2, cluster: 1.32.0 (minor skew: 1)
	I0110 09:44:33.435209  433934 out.go:177] * Done! kubectl is now configured to use "missing-upgrade-191186" cluster and "default" namespace by default
	I0110 09:44:32.786255  438388 cli_runner.go:164] Run: docker network inspect pause-667994 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 09:44:32.814312  438388 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 09:44:32.819111  438388 kubeadm.go:884] updating cluster {Name:pause-667994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 09:44:32.819253  438388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 09:44:32.819302  438388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:44:32.870611  438388 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 09:44:32.870632  438388 crio.go:433] Images already preloaded, skipping extraction
	I0110 09:44:32.870696  438388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 09:44:32.937487  438388 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 09:44:32.937551  438388 cache_images.go:86] Images are preloaded, skipping loading
	I0110 09:44:32.937575  438388 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 09:44:32.937708  438388 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-667994 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 09:44:32.937817  438388 ssh_runner.go:195] Run: crio config
	I0110 09:44:33.158753  438388 cni.go:84] Creating CNI manager for ""
	I0110 09:44:33.158819  438388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:44:33.158851  438388 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 09:44:33.158888  438388 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-667994 NodeName:pause-667994 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 09:44:33.159060  438388 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-667994"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
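	The block above is the multi-document kubeadm/kubelet/kube-proxy configuration that minikube renders and later copies to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal sketch, assuming gopkg.in/yaml.v3 and an illustrative local file path (neither is part of minikube itself), of reading such a multi-document file back and printing each document's kind:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Illustrative path; on the node the rendered config lands in
		// /var/tmp/minikube/kubeadm.yaml.new.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			// Each "---"-separated document carries its own apiVersion/kind,
			// e.g. InitConfiguration, ClusterConfiguration,
			// KubeletConfiguration, KubeProxyConfiguration.
			fmt.Println(doc["kind"], doc["apiVersion"])
		}
	}

	Using a yaml.Decoder rather than a single Unmarshal is what makes the "---" separators above round-trip cleanly.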
	
	I0110 09:44:33.159147  438388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 09:44:33.173495  438388 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 09:44:33.173612  438388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 09:44:33.203563  438388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0110 09:44:33.228826  438388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 09:44:33.317086  438388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0110 09:44:33.372421  438388 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 09:44:33.393404  438388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:44:33.808546  438388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:44:33.827171  438388 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994 for IP: 192.168.76.2
	I0110 09:44:33.827193  438388 certs.go:195] generating shared ca certs ...
	I0110 09:44:33.827210  438388 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:44:33.827348  438388 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 09:44:33.827404  438388 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 09:44:33.827416  438388 certs.go:257] generating profile certs ...
	I0110 09:44:33.827502  438388 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/client.key
	I0110 09:44:33.827570  438388 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/apiserver.key.1f2993f2
	I0110 09:44:33.827622  438388 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/proxy-client.key
	I0110 09:44:33.827731  438388 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 09:44:33.827766  438388 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 09:44:33.827779  438388 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 09:44:33.827804  438388 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 09:44:33.827832  438388 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 09:44:33.827858  438388 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 09:44:33.827908  438388 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 09:44:33.828484  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 09:44:33.853278  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 09:44:33.887041  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 09:44:33.918410  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 09:44:33.952542  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 09:44:33.997742  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 09:44:34.044879  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 09:44:34.087985  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 09:44:34.122581  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 09:44:34.157172  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 09:44:34.191295  438388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 09:44:34.223150  438388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 09:44:34.242288  438388 ssh_runner.go:195] Run: openssl version
	I0110 09:44:34.253498  438388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 09:44:34.265939  438388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 09:44:34.277826  438388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 09:44:34.284396  438388 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 09:44:34.284550  438388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 09:44:34.354167  438388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 09:44:34.365855  438388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:44:34.380979  438388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 09:44:34.400238  438388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:44:34.409237  438388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:44:34.409402  438388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 09:44:34.471406  438388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 09:44:34.481659  438388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 09:44:34.494286  438388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 09:44:34.509988  438388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 09:44:34.516377  438388 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 09:44:34.516544  438388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 09:44:34.580061  438388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
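	Each of the three cycles above installs a CA bundle under /usr/share/ca-certificates, computes its OpenSSL subject hash, and checks for the /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based clients use to look certificates up by hash. A minimal sketch of the same hash-and-link step, assuming the openssl binary is on PATH and the process has privileges to write /etc/ssl/certs (paths are illustrative, not minikube's own code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert mirrors the `openssl x509 -hash` + `ln -fs` sequence in the log.
	func linkCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// os.Symlink fails if the link already exists, so remove first,
		// matching the forced `ln -fs` behaviour above.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}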
	I0110 09:44:34.594381  438388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 09:44:34.598651  438388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 09:44:34.659284  438388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 09:44:34.712620  438388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 09:44:34.765634  438388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 09:44:34.813019  438388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 09:44:34.863291  438388 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
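	The `-checkend 86400` invocations above ask OpenSSL whether each certificate will still be valid 24 hours from now; a failing check is what triggers certificate regeneration on restart. A pure-Go sketch of the same expiry test, with an illustrative certificate path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// inside the given window, the same question `openssl x509 -checkend` answers.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}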
	I0110 09:44:34.911941  438388 kubeadm.go:401] StartCluster: {Name:pause-667994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-667994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:44:34.912151  438388 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 09:44:34.912251  438388 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 09:44:34.965781  438388 cri.go:96] found id: "9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c"
	I0110 09:44:34.965861  438388 cri.go:96] found id: "cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5"
	I0110 09:44:34.965879  438388 cri.go:96] found id: "21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4"
	I0110 09:44:34.965900  438388 cri.go:96] found id: "68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f"
	I0110 09:44:34.965934  438388 cri.go:96] found id: "52a15e29b6810213581004202f307660af088a92dad0ff8891c6255bfd4a109c"
	I0110 09:44:34.965959  438388 cri.go:96] found id: "53f6564b9ab169e7bc731d93eaab979c3c9833109ed4642b5198d8e526714f21"
	I0110 09:44:34.965980  438388 cri.go:96] found id: "ca7b22c04427907811f0cdfff05f6eb66fb79acba12c23d12611c3a16d4a5ea1"
	I0110 09:44:34.966012  438388 cri.go:96] found id: "0784e58579d80d5fdf9ddd218fcc3557f470cd5dafeef80fe3b62c323d467f92"
	I0110 09:44:34.966033  438388 cri.go:96] found id: "a9ee5c4a9a8997f500d31311aaf7abec04fd144f6a38bf93ae0a1a7e06b8a4ec"
	I0110 09:44:34.966055  438388 cri.go:96] found id: "08ecb38cf4e5b37649293c433f10aa7f9823c2691ecdb51233eb8c3474936604"
	I0110 09:44:34.966075  438388 cri.go:96] found id: "3137a4adba2b54df4dbcba37d5c02ee6e8385e299cea66c5a18a8c78c7530e30"
	I0110 09:44:34.966107  438388 cri.go:96] found id: "b64a03dfccee29f0abd41c54d6eab5aad5bf378293778a8157db8c2e1453fdcb"
	I0110 09:44:34.966123  438388 cri.go:96] found id: "3e198530592feda9a423d035b1ef29be2edbe94052f9194b7ca5370b54f3e119"
	I0110 09:44:34.966141  438388 cri.go:96] found id: "6b2602db93009f92d4e46ce3746289ad07e294fdd5632f0cc9df5ca69568a037"
	I0110 09:44:34.966176  438388 cri.go:96] found id: ""
	I0110 09:44:34.966265  438388 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 09:44:34.997589  438388 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:44:34Z" level=error msg="open /run/runc: no such file or directory"
	I0110 09:44:34.997713  438388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 09:44:35.015800  438388 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 09:44:35.015873  438388 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 09:44:35.015964  438388 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 09:44:35.026213  438388 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 09:44:35.027047  438388 kubeconfig.go:125] found "pause-667994" server: "https://192.168.76.2:8443"
	I0110 09:44:35.028055  438388 kapi.go:59] client config for pause-667994: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/client.crt", KeyFile:"/home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/client.key", CAFile:"/home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 09:44:35.028898  438388 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0110 09:44:35.028953  438388 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0110 09:44:35.028973  438388 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0110 09:44:35.029007  438388 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0110 09:44:35.029031  438388 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0110 09:44:35.029052  438388 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0110 09:44:35.029409  438388 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 09:44:35.042024  438388 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 09:44:35.042110  438388 kubeadm.go:602] duration metric: took 26.215836ms to restartPrimaryControlPlane
	I0110 09:44:35.042134  438388 kubeadm.go:403] duration metric: took 130.20941ms to StartCluster
	I0110 09:44:35.042187  438388 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:44:35.042287  438388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:44:35.043395  438388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:44:35.043715  438388 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 09:44:35.044258  438388 config.go:182] Loaded profile config "pause-667994": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:44:35.044242  438388 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 09:44:35.050009  438388 out.go:179] * Enabled addons: 
	I0110 09:44:35.050134  438388 out.go:179] * Verifying Kubernetes components...
	I0110 09:44:35.052818  438388 addons.go:530] duration metric: took 8.57559ms for enable addons: enabled=[]
	I0110 09:44:35.052981  438388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 09:44:35.304241  438388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 09:44:35.325362  438388 node_ready.go:35] waiting up to 6m0s for node "pause-667994" to be "Ready" ...
	I0110 09:44:36.934588  438388 node_ready.go:49] node "pause-667994" is "Ready"
	I0110 09:44:36.934666  438388 node_ready.go:38] duration metric: took 1.609275298s for node "pause-667994" to be "Ready" ...
	I0110 09:44:36.934694  438388 api_server.go:52] waiting for apiserver process to appear ...
	I0110 09:44:36.934788  438388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 09:44:36.969069  438388 api_server.go:72] duration metric: took 1.925300854s to wait for apiserver process to appear ...
	I0110 09:44:36.969150  438388 api_server.go:88] waiting for apiserver healthz status ...
	I0110 09:44:36.969203  438388 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 09:44:36.984118  438388 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 09:44:36.984196  438388 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 09:44:37.469880  438388 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 09:44:37.478631  438388 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 09:44:37.478663  438388 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 09:44:37.969346  438388 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 09:44:37.978118  438388 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 09:44:37.979247  438388 api_server.go:141] control plane version: v1.35.0
	I0110 09:44:37.979278  438388 api_server.go:131] duration metric: took 1.010089728s to wait for apiserver health ...
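	The retry loop above polls https://192.168.76.2:8443/healthz until it returns 200, treating the 403 (the probe is anonymous) and the 500 (post-start hooks still settling) as retryable. A minimal sketch of such a poller; InsecureSkipVerify, the hard-coded URL, and the fixed 500ms backoff are illustrative shortcuts rather than minikube's actual client setup:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 OK or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// Non-200 (403, 500, ...) is logged and retried, as in the trace above.
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}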
	I0110 09:44:37.979290  438388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 09:44:37.982866  438388 system_pods.go:59] 7 kube-system pods found
	I0110 09:44:37.982901  438388 system_pods.go:61] "coredns-7d764666f9-k85wx" [23fc4d7b-fe46-4d48-8fad-90d3528fa2bf] Running
	I0110 09:44:37.982908  438388 system_pods.go:61] "etcd-pause-667994" [41010a59-18de-4cfb-a80e-3855469efcc5] Running
	I0110 09:44:37.982913  438388 system_pods.go:61] "kindnet-zflh7" [598cecc0-71a3-42e3-939d-6d1fa94bf8d4] Running
	I0110 09:44:37.982918  438388 system_pods.go:61] "kube-apiserver-pause-667994" [b67ad2d0-e0a7-4fde-a8ed-b6e4b1be1fcd] Running
	I0110 09:44:37.982923  438388 system_pods.go:61] "kube-controller-manager-pause-667994" [c464575c-7146-4161-a5cf-0e901eb7d210] Running
	I0110 09:44:37.982927  438388 system_pods.go:61] "kube-proxy-np729" [1f9a69f0-2849-4a4b-8636-d8f9e1b0de26] Running
	I0110 09:44:37.982931  438388 system_pods.go:61] "kube-scheduler-pause-667994" [e3b3d147-5983-4fd9-ba96-6cd023f97d4b] Running
	I0110 09:44:37.982936  438388 system_pods.go:74] duration metric: took 3.641902ms to wait for pod list to return data ...
	I0110 09:44:37.982950  438388 default_sa.go:34] waiting for default service account to be created ...
	I0110 09:44:37.985980  438388 default_sa.go:45] found service account: "default"
	I0110 09:44:37.986051  438388 default_sa.go:55] duration metric: took 3.090369ms for default service account to be created ...
	I0110 09:44:37.986078  438388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 09:44:37.989349  438388 system_pods.go:86] 7 kube-system pods found
	I0110 09:44:37.989380  438388 system_pods.go:89] "coredns-7d764666f9-k85wx" [23fc4d7b-fe46-4d48-8fad-90d3528fa2bf] Running
	I0110 09:44:37.989387  438388 system_pods.go:89] "etcd-pause-667994" [41010a59-18de-4cfb-a80e-3855469efcc5] Running
	I0110 09:44:37.989392  438388 system_pods.go:89] "kindnet-zflh7" [598cecc0-71a3-42e3-939d-6d1fa94bf8d4] Running
	I0110 09:44:37.989397  438388 system_pods.go:89] "kube-apiserver-pause-667994" [b67ad2d0-e0a7-4fde-a8ed-b6e4b1be1fcd] Running
	I0110 09:44:37.989402  438388 system_pods.go:89] "kube-controller-manager-pause-667994" [c464575c-7146-4161-a5cf-0e901eb7d210] Running
	I0110 09:44:37.989407  438388 system_pods.go:89] "kube-proxy-np729" [1f9a69f0-2849-4a4b-8636-d8f9e1b0de26] Running
	I0110 09:44:37.989417  438388 system_pods.go:89] "kube-scheduler-pause-667994" [e3b3d147-5983-4fd9-ba96-6cd023f97d4b] Running
	I0110 09:44:37.989425  438388 system_pods.go:126] duration metric: took 3.328773ms to wait for k8s-apps to be running ...
	I0110 09:44:37.989437  438388 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 09:44:37.989498  438388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:44:38.006689  438388 system_svc.go:56] duration metric: took 17.241077ms WaitForService to wait for kubelet
	I0110 09:44:38.006724  438388 kubeadm.go:587] duration metric: took 2.962959358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 09:44:38.006744  438388 node_conditions.go:102] verifying NodePressure condition ...
	I0110 09:44:38.010504  438388 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 09:44:38.010539  438388 node_conditions.go:123] node cpu capacity is 2
	I0110 09:44:38.010554  438388 node_conditions.go:105] duration metric: took 3.803981ms to run NodePressure ...
	I0110 09:44:38.010568  438388 start.go:242] waiting for startup goroutines ...
	I0110 09:44:38.010576  438388 start.go:247] waiting for cluster config update ...
	I0110 09:44:38.010585  438388 start.go:256] writing updated cluster config ...
	I0110 09:44:38.010914  438388 ssh_runner.go:195] Run: rm -f paused
	I0110 09:44:38.015030  438388 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 09:44:38.015697  438388 kapi.go:59] client config for pause-667994: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/client.crt", KeyFile:"/home/jenkins/minikube-integration/22427-308033/.minikube/profiles/pause-667994/client.key", CAFile:"/home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 09:44:38.083697  438388 pod_ready.go:83] waiting for pod "coredns-7d764666f9-k85wx" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:38.089116  438388 pod_ready.go:94] pod "coredns-7d764666f9-k85wx" is "Ready"
	I0110 09:44:38.089146  438388 pod_ready.go:86] duration metric: took 5.417945ms for pod "coredns-7d764666f9-k85wx" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:38.092016  438388 pod_ready.go:83] waiting for pod "etcd-pause-667994" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:38.097186  438388 pod_ready.go:94] pod "etcd-pause-667994" is "Ready"
	I0110 09:44:38.097216  438388 pod_ready.go:86] duration metric: took 5.17498ms for pod "etcd-pause-667994" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:38.099852  438388 pod_ready.go:83] waiting for pod "kube-apiserver-pause-667994" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:38.104653  438388 pod_ready.go:94] pod "kube-apiserver-pause-667994" is "Ready"
	I0110 09:44:38.104726  438388 pod_ready.go:86] duration metric: took 4.847369ms for pod "kube-apiserver-pause-667994" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:38.107152  438388 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-667994" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:38.419860  438388 pod_ready.go:94] pod "kube-controller-manager-pause-667994" is "Ready"
	I0110 09:44:38.419891  438388 pod_ready.go:86] duration metric: took 312.714993ms for pod "kube-controller-manager-pause-667994" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:38.618857  438388 pod_ready.go:83] waiting for pod "kube-proxy-np729" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:39.019043  438388 pod_ready.go:94] pod "kube-proxy-np729" is "Ready"
	I0110 09:44:39.019075  438388 pod_ready.go:86] duration metric: took 400.186004ms for pod "kube-proxy-np729" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:39.219081  438388 pod_ready.go:83] waiting for pod "kube-scheduler-pause-667994" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:39.619575  438388 pod_ready.go:94] pod "kube-scheduler-pause-667994" is "Ready"
	I0110 09:44:39.619603  438388 pod_ready.go:86] duration metric: took 400.486052ms for pod "kube-scheduler-pause-667994" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 09:44:39.619618  438388 pod_ready.go:40] duration metric: took 1.604551894s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
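	The per-pod waits above check the Ready condition for one pod per control-plane label (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler). A minimal client-go sketch of the same check; the kubeconfig path and the single label selector are illustrative assumptions, not minikube's exact code:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podsReady lists kube-system pods matching selector and reports whether
	// every one of them has the PodReady condition set to True.
	func podsReady(clientset *kubernetes.Clientset, selector string) (bool, error) {
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		for _, pod := range pods.Items {
			ready := false
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ok, err := podsReady(clientset, "component=kube-apiserver")
		fmt.Println(ok, err)
	}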
	I0110 09:44:39.677266  438388 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 09:44:39.680669  438388 out.go:203] 
	W0110 09:44:39.683455  438388 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 09:44:39.686355  438388 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 09:44:39.689215  438388 out.go:179] * Done! kubectl is now configured to use "pause-667994" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.166051011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.171579785Z" level=info msg="Created container ca7b22c04427907811f0cdfff05f6eb66fb79acba12c23d12611c3a16d4a5ea1: kube-system/kube-apiserver-pause-667994/kube-apiserver" id=c530e374-7fec-46f1-941d-1fb4791b69f4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.18108311Z" level=info msg="Creating container: kube-system/kube-scheduler-pause-667994/kube-scheduler" id=4a049ea9-0507-419b-8458-9576b1c34b23 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.181237476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.181816037Z" level=info msg="Starting container: 53f6564b9ab169e7bc731d93eaab979c3c9833109ed4642b5198d8e526714f21" id=8e25465a-ac19-4bfe-aa34-297b6a04c5d0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.182829517Z" level=info msg="Starting container: ca7b22c04427907811f0cdfff05f6eb66fb79acba12c23d12611c3a16d4a5ea1" id=5934b3ab-401c-4a58-bccd-adc14f87a33c name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.187022115Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.187506807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.197241217Z" level=info msg="Started container" PID=2208 containerID=52a15e29b6810213581004202f307660af088a92dad0ff8891c6255bfd4a109c description=kube-system/etcd-pause-667994/etcd id=4ab96ddd-48b7-42f9-9539-47ca98ca7db7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=03799d5b5fb3c59e771b5648ab1afc8b1fad7fe60ba65379db66b67e6506e78a
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.204388117Z" level=info msg="Created container 68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f: kube-system/kindnet-zflh7/kindnet-cni" id=6eeee982-164c-4785-b1a4-3d313190a033 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.233132329Z" level=info msg="Started container" PID=2201 containerID=53f6564b9ab169e7bc731d93eaab979c3c9833109ed4642b5198d8e526714f21 description=kube-system/kube-controller-manager-pause-667994/kube-controller-manager id=8e25465a-ac19-4bfe-aa34-297b6a04c5d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cf399b5021bc6d3182dbc4c2da04758339777866059ca813bc2ac17b084758c3
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.23955148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.243295267Z" level=info msg="Starting container: 68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f" id=95acd06c-f871-4d4a-8be2-71767f2bb815 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.243806807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.261338375Z" level=info msg="Started container" PID=2195 containerID=ca7b22c04427907811f0cdfff05f6eb66fb79acba12c23d12611c3a16d4a5ea1 description=kube-system/kube-apiserver-pause-667994/kube-apiserver id=5934b3ab-401c-4a58-bccd-adc14f87a33c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac885fdc2bdf47ffa7b3b7637cfa57a748ea2b171aab4ae90e4a054910c2c02a
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.277628356Z" level=info msg="Started container" PID=2213 containerID=68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f description=kube-system/kindnet-zflh7/kindnet-cni id=95acd06c-f871-4d4a-8be2-71767f2bb815 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b44661d7c2b8767311d7161e035d937e5cc485bbec332bde3afd7eb76d21d3a
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.335456421Z" level=info msg="Created container 9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c: kube-system/kube-scheduler-pause-667994/kube-scheduler" id=4a049ea9-0507-419b-8458-9576b1c34b23 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.337432697Z" level=info msg="Starting container: 9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c" id=fdaf268a-8dbb-47cf-82f1-cd77326dfcb1 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.346948347Z" level=info msg="Created container 21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4: kube-system/coredns-7d764666f9-k85wx/coredns" id=5be89a7b-5dcc-438c-98e6-207ce147a696 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.35074948Z" level=info msg="Starting container: 21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4" id=e2bdb6fd-9a3d-4843-9f92-825d6333fbdd name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.352772806Z" level=info msg="Started container" PID=2254 containerID=9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c description=kube-system/kube-scheduler-pause-667994/kube-scheduler id=fdaf268a-8dbb-47cf-82f1-cd77326dfcb1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f435bcb1c7302e13a50c964f210b7ee88386e4283069a14220c2d578e1e7a986
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.354951211Z" level=info msg="Started container" PID=2242 containerID=21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4 description=kube-system/coredns-7d764666f9-k85wx/coredns id=e2bdb6fd-9a3d-4843-9f92-825d6333fbdd name=/runtime.v1.RuntimeService/StartContainer sandboxID=e35532fd21f54ac7df4cb44f0da8e222ff754574cdd0062aae9c8225e9401941
	Jan 10 09:44:34 pause-667994 crio[2101]: time="2026-01-10T09:44:34.025367235Z" level=info msg="Created container cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5: kube-system/kube-proxy-np729/kube-proxy" id=dd27e74d-3608-4bdd-8ed0-f79a3a817398 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:44:34 pause-667994 crio[2101]: time="2026-01-10T09:44:34.029072703Z" level=info msg="Starting container: cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5" id=0613a482-8f0a-4bab-8bd9-00f3a55b2ae1 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:44:34 pause-667994 crio[2101]: time="2026-01-10T09:44:34.032135938Z" level=info msg="Started container" PID=2241 containerID=cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5 description=kube-system/kube-proxy-np729/kube-proxy id=0613a482-8f0a-4bab-8bd9-00f3a55b2ae1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ef9575200440a488bc28644a3c652716aa5c866e9cafeace9834ec1355285ca
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	9a2e759498c31       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     9 seconds ago       Running             kube-scheduler            1                   f435bcb1c7302       kube-scheduler-pause-667994            kube-system
	cbd52ff43c844       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     9 seconds ago       Running             kube-proxy                1                   0ef9575200440       kube-proxy-np729                       kube-system
	21e6fbd88cb50       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     9 seconds ago       Running             coredns                   1                   e35532fd21f54       coredns-7d764666f9-k85wx               kube-system
	68fb1a0dce1c9       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     9 seconds ago       Running             kindnet-cni               1                   2b44661d7c2b8       kindnet-zflh7                          kube-system
	52a15e29b6810       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     9 seconds ago       Running             etcd                      1                   03799d5b5fb3c       etcd-pause-667994                      kube-system
	53f6564b9ab16       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     9 seconds ago       Running             kube-controller-manager   1                   cf399b5021bc6       kube-controller-manager-pause-667994   kube-system
	ca7b22c044279       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     9 seconds ago       Running             kube-apiserver            1                   ac885fdc2bdf4       kube-apiserver-pause-667994            kube-system
	0784e58579d80       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     22 seconds ago      Exited              coredns                   0                   e35532fd21f54       coredns-7d764666f9-k85wx               kube-system
	a9ee5c4a9a899       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   33 seconds ago      Exited              kindnet-cni               0                   2b44661d7c2b8       kindnet-zflh7                          kube-system
	08ecb38cf4e5b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     35 seconds ago      Exited              kube-proxy                0                   0ef9575200440       kube-proxy-np729                       kube-system
	3137a4adba2b5       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     49 seconds ago      Exited              kube-apiserver            0                   ac885fdc2bdf4       kube-apiserver-pause-667994            kube-system
	b64a03dfccee2       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     49 seconds ago      Exited              kube-scheduler            0                   f435bcb1c7302       kube-scheduler-pause-667994            kube-system
	3e198530592fe       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     49 seconds ago      Exited              kube-controller-manager   0                   cf399b5021bc6       kube-controller-manager-pause-667994   kube-system
	6b2602db93009       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     49 seconds ago      Exited              etcd                      0                   03799d5b5fb3c       etcd-pause-667994                      kube-system
	
	
	==> coredns [0784e58579d80d5fdf9ddd218fcc3557f470cd5dafeef80fe3b62c323d467f92] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42133 - 19475 "HINFO IN 6944866512423796572.2950916723408314853. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.052145536s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45793 - 55333 "HINFO IN 8773195563947633132.7269882810796692473. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010811563s
	
	
	==> describe nodes <==
	Name:               pause-667994
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-667994
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=pause-667994
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T09_44_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 09:43:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-667994
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 09:44:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 09:44:19 +0000   Sat, 10 Jan 2026 09:43:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 09:44:19 +0000   Sat, 10 Jan 2026 09:43:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 09:44:19 +0000   Sat, 10 Jan 2026 09:43:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 09:44:19 +0000   Sat, 10 Jan 2026 09:44:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-667994
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                bb4a1973-b020-4b0d-a4b8-c5a69bdeb681
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-k85wx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     37s
	  kube-system                 etcd-pause-667994                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-zflh7                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      36s
	  kube-system                 kube-apiserver-pause-667994             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-pause-667994    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-np729                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-pause-667994             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  38s   node-controller  Node pause-667994 event: Registered Node pause-667994 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node pause-667994 event: Registered Node pause-667994 in Controller
	
	
	==> dmesg <==
	[Jan10 09:22] overlayfs: idmapped layers are currently not supported
	[Jan10 09:23] overlayfs: idmapped layers are currently not supported
	[Jan10 09:24] overlayfs: idmapped layers are currently not supported
	[Jan10 09:25] overlayfs: idmapped layers are currently not supported
	[  +3.457822] overlayfs: idmapped layers are currently not supported
	[Jan10 09:27] overlayfs: idmapped layers are currently not supported
	[ +38.319069] overlayfs: idmapped layers are currently not supported
	[Jan10 09:28] overlayfs: idmapped layers are currently not supported
	[  +3.010233] overlayfs: idmapped layers are currently not supported
	[Jan10 09:29] overlayfs: idmapped layers are currently not supported
	[Jan10 09:30] overlayfs: idmapped layers are currently not supported
	[Jan10 09:31] overlayfs: idmapped layers are currently not supported
	[Jan10 09:35] overlayfs: idmapped layers are currently not supported
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [52a15e29b6810213581004202f307660af088a92dad0ff8891c6255bfd4a109c] <==
	{"level":"info","ts":"2026-01-10T09:44:33.625189Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T09:44:33.647335Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T09:44:33.647404Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T09:44:33.625254Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T09:44:33.709041Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T09:44:33.625547Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T09:44:33.681319Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T09:44:33.681390Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T09:44:33.709982Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T09:44:33.710083Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T09:44:33.710169Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T09:44:33.710221Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T09:44:33.712535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T09:44:33.712604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T09:44:33.712651Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T09:44:33.712706Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T09:44:33.744802Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-667994 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T09:44:33.744924Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T09:44:33.745176Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T09:44:33.746061Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T09:44:33.748172Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T09:44:33.748300Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T09:44:33.748349Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T09:44:33.769312Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T09:44:33.806340Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> etcd [6b2602db93009f92d4e46ce3746289ad07e294fdd5632f0cc9df5ca69568a037] <==
	{"level":"info","ts":"2026-01-10T09:43:54.150275Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T09:43:54.151374Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:43:54.227104Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:43:54.228743Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:43:54.229312Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:43:54.229836Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T09:43:54.229953Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T09:44:24.912772Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2026-01-10T09:44:24.912824Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-667994","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2026-01-10T09:44:24.912930Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2026-01-10T09:44:25.193068Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2026-01-10T09:44:25.193156Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T09:44:25.193176Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2026-01-10T09:44:25.193339Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-10T09:44:25.193376Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-10T09:44:25.193403Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T09:44:25.193415Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2026-01-10T09:44:25.193436Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2026-01-10T09:44:25.193662Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-10T09:44:25.193687Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-10T09:44:25.193697Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T09:44:25.196700Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2026-01-10T09:44:25.196785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T09:44:25.196821Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T09:44:25.196839Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-667994","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 09:44:43 up  2:27,  0 user,  load average: 3.67, 2.43, 2.51
	Linux pause-667994 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f] <==
	I0110 09:44:33.462950       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 09:44:33.483033       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 09:44:33.491854       1 main.go:148] setting mtu 1500 for CNI 
	I0110 09:44:33.499115       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 09:44:33.499442       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T09:44:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 09:44:33.836150       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 09:44:33.836177       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 09:44:33.836186       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 09:44:33.839097       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 09:44:37.136914       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 09:44:37.137028       1 metrics.go:72] Registering metrics
	I0110 09:44:37.137111       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kindnet [a9ee5c4a9a8997f500d31311aaf7abec04fd144f6a38bf93ae0a1a7e06b8a4ec] <==
	I0110 09:44:08.920725       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 09:44:09.016701       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 09:44:09.016919       1 main.go:148] setting mtu 1500 for CNI 
	I0110 09:44:09.016964       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 09:44:09.017009       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T09:44:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 09:44:09.220096       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 09:44:09.220317       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 09:44:09.220360       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 09:44:09.220551       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 09:44:09.520677       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 09:44:09.520703       1 metrics.go:72] Registering metrics
	I0110 09:44:09.520770       1 controller.go:711] "Syncing nftables rules"
	I0110 09:44:19.220297       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 09:44:19.220370       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3137a4adba2b54df4dbcba37d5c02ee6e8385e299cea66c5a18a8c78c7530e30] <==
	W0110 09:44:24.940283       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943297       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943376       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943429       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943478       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943525       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943579       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943625       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.948832       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.948918       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949305       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949348       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949391       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949439       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949483       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949535       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949602       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949643       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949680       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949720       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949763       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949803       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949843       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949884       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949933       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ca7b22c04427907811f0cdfff05f6eb66fb79acba12c23d12611c3a16d4a5ea1] <==
	I0110 09:44:36.803644       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0110 09:44:37.025485       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 09:44:37.042351       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:37.042455       1 policy_source.go:248] refreshing policies
	I0110 09:44:37.060984       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 09:44:37.070478       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 09:44:37.070865       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 09:44:37.071200       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 09:44:37.075634       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 09:44:37.075993       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:37.076081       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:37.076313       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:37.076724       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 09:44:37.076749       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0110 09:44:37.083953       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 09:44:37.089249       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 09:44:37.089355       1 aggregator.go:187] initial CRD sync complete...
	I0110 09:44:37.089372       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 09:44:37.089379       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 09:44:37.089384       1 cache.go:39] Caches are synced for autoregister controller
	I0110 09:44:37.091964       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 09:44:37.103398       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 09:44:37.103532       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 09:44:37.681364       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 09:44:38.911735       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-controller-manager [3e198530592feda9a423d035b1ef29be2edbe94052f9194b7ca5370b54f3e119] <==
	I0110 09:44:04.574769       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.574830       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.574853       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.574863       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.574877       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.574919       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.575173       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.575345       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.577714       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.577779       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.577805       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.577858       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.592684       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.603064       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.603857       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.604033       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.641299       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.641350       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.641384       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.656042       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.694972       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.694998       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 09:44:04.695004       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 09:44:04.709117       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:19.576537       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [53f6564b9ab169e7bc731d93eaab979c3c9833109ed4642b5198d8e526714f21] <==
	I0110 09:44:40.313959       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.313983       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314020       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314054       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314256       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314303       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 09:44:40.314333       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.313365       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314400       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314459       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314511       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314335       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 09:44:40.313966       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314949       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.315079       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.315122       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.313974       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314324       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314315       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.328790       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.328825       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.328924       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314377       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314392       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.351117       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [08ecb38cf4e5b37649293c433f10aa7f9823c2691ecdb51233eb8c3474936604] <==
	I0110 09:44:06.878696       1 server_linux.go:53] "Using iptables proxy"
	I0110 09:44:06.970020       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 09:44:07.070751       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:07.070784       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 09:44:07.070914       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 09:44:07.094678       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 09:44:07.094793       1 server_linux.go:136] "Using iptables Proxier"
	I0110 09:44:07.098463       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 09:44:07.098895       1 server.go:529] "Version info" version="v1.35.0"
	I0110 09:44:07.098948       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 09:44:07.101743       1 config.go:106] "Starting endpoint slice config controller"
	I0110 09:44:07.101813       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 09:44:07.102147       1 config.go:200] "Starting service config controller"
	I0110 09:44:07.102194       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 09:44:07.102515       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 09:44:07.102558       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 09:44:07.103053       1 config.go:309] "Starting node config controller"
	I0110 09:44:07.103099       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 09:44:07.103129       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 09:44:07.202222       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 09:44:07.202413       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 09:44:07.203697       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5] <==
	I0110 09:44:35.395847       1 server_linux.go:53] "Using iptables proxy"
	I0110 09:44:35.674996       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 09:44:37.077167       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:37.077209       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 09:44:37.077301       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 09:44:37.105053       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 09:44:37.105115       1 server_linux.go:136] "Using iptables Proxier"
	I0110 09:44:37.109400       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 09:44:37.109721       1 server.go:529] "Version info" version="v1.35.0"
	I0110 09:44:37.109740       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 09:44:37.111172       1 config.go:200] "Starting service config controller"
	I0110 09:44:37.111274       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 09:44:37.111322       1 config.go:106] "Starting endpoint slice config controller"
	I0110 09:44:37.111350       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 09:44:37.111394       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 09:44:37.111429       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 09:44:37.112426       1 config.go:309] "Starting node config controller"
	I0110 09:44:37.112632       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 09:44:37.112670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 09:44:37.212196       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 09:44:37.212205       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 09:44:37.212221       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c] <==
	I0110 09:44:35.115220       1 serving.go:386] Generated self-signed cert in-memory
	W0110 09:44:36.890627       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 09:44:36.890665       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 09:44:36.890676       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 09:44:36.890683       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 09:44:36.993695       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 09:44:36.993799       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 09:44:37.015527       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 09:44:37.015663       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 09:44:37.015681       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 09:44:37.024814       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 09:44:37.116751       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [b64a03dfccee29f0abd41c54d6eab5aad5bf378293778a8157db8c2e1453fdcb] <==
	E0110 09:43:57.571499       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 09:43:57.571984       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 09:43:57.572438       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 09:43:57.572763       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 09:43:57.572829       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 09:43:57.572866       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 09:43:57.572903       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 09:43:57.572943       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 09:43:57.572958       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 09:43:57.572979       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 09:43:58.389623       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 09:43:58.447519       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 09:43:58.448977       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 09:43:58.459919       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 09:43:58.572747       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 09:43:58.600810       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 09:43:58.610206       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 09:43:58.716670       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I0110 09:44:00.479468       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:24.896275       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0110 09:44:24.896539       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 09:44:24.913948       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0110 09:44:24.914015       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0110 09:44:24.914021       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0110 09:44:24.914038       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.783405    1297 reflector.go:204] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-667994\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.783545    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="58772b5c945bdc20530019da53aef575" pod="kube-system/kube-scheduler-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.884717    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="9120de934cbdbaf36f6dc53cd92c2c75" pod="kube-system/etcd-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.923813    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="c1422dbe241a22040fb2786e3acd6f43" pod="kube-system/kube-controller-manager-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.957790    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="99b43641a2dc17b6a09a264654c43906" pod="kube-system/kube-apiserver-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.963153    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-np729\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="1f9a69f0-2849-4a4b-8636-d8f9e1b0de26" pod="kube-system/kube-proxy-np729"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.964482    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-zflh7\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="598cecc0-71a3-42e3-939d-6d1fa94bf8d4" pod="kube-system/kindnet-zflh7"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.966027    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-k85wx\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="23fc4d7b-fe46-4d48-8fad-90d3528fa2bf" pod="kube-system/coredns-7d764666f9-k85wx"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.967204    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="58772b5c945bdc20530019da53aef575" pod="kube-system/kube-scheduler-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.968240    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="9120de934cbdbaf36f6dc53cd92c2c75" pod="kube-system/etcd-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.971868    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="c1422dbe241a22040fb2786e3acd6f43" pod="kube-system/kube-controller-manager-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.973543    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="99b43641a2dc17b6a09a264654c43906" pod="kube-system/kube-apiserver-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.995740    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-np729\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="1f9a69f0-2849-4a4b-8636-d8f9e1b0de26" pod="kube-system/kube-proxy-np729"
	Jan 10 09:44:37 pause-667994 kubelet[1297]: E0110 09:44:37.012883    1297 status_manager.go:1045] "Failed to get status for pod" err=<
	Jan 10 09:44:37 pause-667994 kubelet[1297]:         pods "kindnet-zflh7" is forbidden: User "system:node:pause-667994" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-667994' and this object
	Jan 10 09:44:37 pause-667994 kubelet[1297]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	Jan 10 09:44:37 pause-667994 kubelet[1297]:  > podUID="598cecc0-71a3-42e3-939d-6d1fa94bf8d4" pod="kube-system/kindnet-zflh7"
	Jan 10 09:44:37 pause-667994 kubelet[1297]: E0110 09:44:37.014783    1297 status_manager.go:1045] "Failed to get status for pod" err=<
	Jan 10 09:44:37 pause-667994 kubelet[1297]:         pods "coredns-7d764666f9-k85wx" is forbidden: User "system:node:pause-667994" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-667994' and this object
	Jan 10 09:44:37 pause-667994 kubelet[1297]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Jan 10 09:44:37 pause-667994 kubelet[1297]:  > podUID="23fc4d7b-fe46-4d48-8fad-90d3528fa2bf" pod="kube-system/coredns-7d764666f9-k85wx"
	Jan 10 09:44:37 pause-667994 kubelet[1297]: E0110 09:44:37.628241    1297 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-667994" containerName="kube-apiserver"
	Jan 10 09:44:40 pause-667994 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 09:44:40 pause-667994 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 09:44:40 pause-667994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-667994 -n pause-667994
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-667994 -n pause-667994: exit status 2 (551.359977ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-667994 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-667994
helpers_test.go:244: (dbg) docker inspect pause-667994:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4",
	        "Created": "2026-01-10T09:43:38.380700561Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 434235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T09:43:38.533344573Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4/hosts",
	        "LogPath": "/var/lib/docker/containers/49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4/49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4-json.log",
	        "Name": "/pause-667994",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-667994:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-667994",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49ea7664f096a003a4c7a13079ce6cbaea040127886ca5c0e195b1658a17d2d4",
	                "LowerDir": "/var/lib/docker/overlay2/d282b19c42149e59b8ba4bdaa13ff6428ef0a4321a240cada956ea9cba54616f-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d282b19c42149e59b8ba4bdaa13ff6428ef0a4321a240cada956ea9cba54616f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d282b19c42149e59b8ba4bdaa13ff6428ef0a4321a240cada956ea9cba54616f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d282b19c42149e59b8ba4bdaa13ff6428ef0a4321a240cada956ea9cba54616f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-667994",
	                "Source": "/var/lib/docker/volumes/pause-667994/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-667994",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-667994",
	                "name.minikube.sigs.k8s.io": "pause-667994",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67f3ea46b1f1e1135407b8c43282379648175e923d2565637124ec9489ff8efb",
	            "SandboxKey": "/var/run/docker/netns/67f3ea46b1f1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33334"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33335"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33338"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33336"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33337"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-667994": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:96:fe:98:02:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "91bc10798e2b8f8d700f1fb87405c7bb915a3111bec2f90fb0f7d1eac54c1f8b",
	                    "EndpointID": "6d7d1a755edbdbc2b13e8b6c43f01e5ba27a2c86d6bd6e47e9b10b8e6b702ed8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-667994",
	                        "49ea7664f096"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-667994 -n pause-667994
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-667994 -n pause-667994: exit status 2 (408.159398ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-667994 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-667994 logs -n 25: (1.459625558s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-885817                                                                                         │ multinode-885817            │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │                     │
	│ start   │ -p multinode-885817-m02 --driver=docker  --container-runtime=crio                                                │ multinode-885817-m02        │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │                     │
	│ start   │ -p multinode-885817-m03 --driver=docker  --container-runtime=crio                                                │ multinode-885817-m03        │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │ 10 Jan 26 09:41 UTC │
	│ node    │ add -p multinode-885817                                                                                          │ multinode-885817            │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │                     │
	│ delete  │ -p multinode-885817-m03                                                                                          │ multinode-885817-m03        │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │ 10 Jan 26 09:41 UTC │
	│ delete  │ -p multinode-885817                                                                                              │ multinode-885817            │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │ 10 Jan 26 09:41 UTC │
	│ start   │ -p scheduled-stop-472417 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:41 UTC │ 10 Jan 26 09:42 UTC │
	│ stop    │ -p scheduled-stop-472417 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --cancel-scheduled                                                                      │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │ 10 Jan 26 09:42 UTC │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │                     │
	│ stop    │ -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:42 UTC │ 10 Jan 26 09:42 UTC │
	│ delete  │ -p scheduled-stop-472417                                                                                         │ scheduled-stop-472417       │ jenkins │ v1.37.0 │ 10 Jan 26 09:43 UTC │ 10 Jan 26 09:43 UTC │
	│ start   │ -p insufficient-storage-104326 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-104326 │ jenkins │ v1.37.0 │ 10 Jan 26 09:43 UTC │                     │
	│ delete  │ -p insufficient-storage-104326                                                                                   │ insufficient-storage-104326 │ jenkins │ v1.37.0 │ 10 Jan 26 09:43 UTC │ 10 Jan 26 09:43 UTC │
	│ start   │ -p pause-667994 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-667994                │ jenkins │ v1.37.0 │ 10 Jan 26 09:43 UTC │ 10 Jan 26 09:44 UTC │
	│ start   │ -p missing-upgrade-191186 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-191186      │ jenkins │ v1.35.0 │ 10 Jan 26 09:43 UTC │ 10 Jan 26 09:44 UTC │
	│ start   │ -p pause-667994 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-667994                │ jenkins │ v1.37.0 │ 10 Jan 26 09:44 UTC │ 10 Jan 26 09:44 UTC │
	│ pause   │ -p pause-667994 --alsologtostderr -v=5                                                                           │ pause-667994                │ jenkins │ v1.37.0 │ 10 Jan 26 09:44 UTC │                     │
	│ start   │ -p missing-upgrade-191186 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-191186      │ jenkins │ v1.37.0 │ 10 Jan 26 09:44 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 09:44:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 09:44:44.046629  440387 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:44:44.046798  440387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:44:44.046807  440387 out.go:374] Setting ErrFile to fd 2...
	I0110 09:44:44.046813  440387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:44:44.047119  440387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:44:44.047496  440387 out.go:368] Setting JSON to false
	I0110 09:44:44.048336  440387 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8833,"bootTime":1768029451,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:44:44.048401  440387 start.go:143] virtualization:  
	I0110 09:44:44.054040  440387 out.go:179] * [missing-upgrade-191186] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:44:44.057845  440387 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:44:44.058016  440387 notify.go:221] Checking for updates...
	I0110 09:44:44.063552  440387 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:44:44.066404  440387 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:44:44.069186  440387 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:44:44.072091  440387 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:44:44.074866  440387 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:44:44.078682  440387 config.go:182] Loaded profile config "missing-upgrade-191186": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 09:44:44.082125  440387 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I0110 09:44:44.084823  440387 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:44:44.144200  440387 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:44:44.144304  440387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:44:44.219108  440387 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:44:44.20901129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:44:44.219213  440387 docker.go:319] overlay module found
	I0110 09:44:44.222578  440387 out.go:179] * Using the docker driver based on existing profile
	I0110 09:44:44.225294  440387 start.go:309] selected driver: docker
	I0110 09:44:44.225313  440387 start.go:928] validating driver "docker" against &{Name:missing-upgrade-191186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-191186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:44:44.225400  440387 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:44:44.226133  440387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:44:44.303724  440387 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:44:44.293420615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:44:44.304014  440387 cni.go:84] Creating CNI manager for ""
	I0110 09:44:44.304075  440387 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:44:44.304117  440387 start.go:353] cluster config:
	{Name:missing-upgrade-191186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-191186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:44:44.312576  440387 out.go:179] * Starting "missing-upgrade-191186" primary control-plane node in "missing-upgrade-191186" cluster
	I0110 09:44:44.315388  440387 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 09:44:44.318527  440387 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:44:44.322482  440387 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0110 09:44:44.322530  440387 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I0110 09:44:44.322539  440387 cache.go:65] Caching tarball of preloaded images
	I0110 09:44:44.322624  440387 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 09:44:44.322635  440387 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0110 09:44:44.322748  440387 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/missing-upgrade-191186/config.json ...
	I0110 09:44:44.322967  440387 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0110 09:44:44.352848  440387 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0110 09:44:44.352874  440387 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0110 09:44:44.352889  440387 cache.go:243] Successfully downloaded all kic artifacts
	I0110 09:44:44.352921  440387 start.go:360] acquireMachinesLock for missing-upgrade-191186: {Name:mk75bf117085e1ffcdaf1dfe4a0dceed1f3537d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 09:44:44.352978  440387 start.go:364] duration metric: took 35.783µs to acquireMachinesLock for "missing-upgrade-191186"
	I0110 09:44:44.353002  440387 start.go:96] Skipping create...Using existing machine configuration
	I0110 09:44:44.353012  440387 fix.go:54] fixHost starting: 
	I0110 09:44:44.353275  440387 cli_runner.go:164] Run: docker container inspect missing-upgrade-191186 --format={{.State.Status}}
	W0110 09:44:44.369338  440387 cli_runner.go:211] docker container inspect missing-upgrade-191186 --format={{.State.Status}} returned with exit code 1
	I0110 09:44:44.369399  440387 fix.go:112] recreateIfNeeded on missing-upgrade-191186: state= err=unknown state "missing-upgrade-191186": docker container inspect missing-upgrade-191186 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-191186
	I0110 09:44:44.369430  440387 fix.go:117] machineExists: false. err=machine does not exist
	I0110 09:44:44.372626  440387 out.go:179] * docker "missing-upgrade-191186" container is missing, will recreate.
	
	
	==> CRI-O <==
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.204388117Z" level=info msg="Created container 68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f: kube-system/kindnet-zflh7/kindnet-cni" id=6eeee982-164c-4785-b1a4-3d313190a033 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.233132329Z" level=info msg="Started container" PID=2201 containerID=53f6564b9ab169e7bc731d93eaab979c3c9833109ed4642b5198d8e526714f21 description=kube-system/kube-controller-manager-pause-667994/kube-controller-manager id=8e25465a-ac19-4bfe-aa34-297b6a04c5d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cf399b5021bc6d3182dbc4c2da04758339777866059ca813bc2ac17b084758c3
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.23955148Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.243295267Z" level=info msg="Starting container: 68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f" id=95acd06c-f871-4d4a-8be2-71767f2bb815 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.243806807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.261338375Z" level=info msg="Started container" PID=2195 containerID=ca7b22c04427907811f0cdfff05f6eb66fb79acba12c23d12611c3a16d4a5ea1 description=kube-system/kube-apiserver-pause-667994/kube-apiserver id=5934b3ab-401c-4a58-bccd-adc14f87a33c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac885fdc2bdf47ffa7b3b7637cfa57a748ea2b171aab4ae90e4a054910c2c02a
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.277628356Z" level=info msg="Started container" PID=2213 containerID=68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f description=kube-system/kindnet-zflh7/kindnet-cni id=95acd06c-f871-4d4a-8be2-71767f2bb815 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2b44661d7c2b8767311d7161e035d937e5cc485bbec332bde3afd7eb76d21d3a
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.335456421Z" level=info msg="Created container 9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c: kube-system/kube-scheduler-pause-667994/kube-scheduler" id=4a049ea9-0507-419b-8458-9576b1c34b23 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.337432697Z" level=info msg="Starting container: 9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c" id=fdaf268a-8dbb-47cf-82f1-cd77326dfcb1 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.346948347Z" level=info msg="Created container 21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4: kube-system/coredns-7d764666f9-k85wx/coredns" id=5be89a7b-5dcc-438c-98e6-207ce147a696 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.35074948Z" level=info msg="Starting container: 21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4" id=e2bdb6fd-9a3d-4843-9f92-825d6333fbdd name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.352772806Z" level=info msg="Started container" PID=2254 containerID=9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c description=kube-system/kube-scheduler-pause-667994/kube-scheduler id=fdaf268a-8dbb-47cf-82f1-cd77326dfcb1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f435bcb1c7302e13a50c964f210b7ee88386e4283069a14220c2d578e1e7a986
	Jan 10 09:44:33 pause-667994 crio[2101]: time="2026-01-10T09:44:33.354951211Z" level=info msg="Started container" PID=2242 containerID=21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4 description=kube-system/coredns-7d764666f9-k85wx/coredns id=e2bdb6fd-9a3d-4843-9f92-825d6333fbdd name=/runtime.v1.RuntimeService/StartContainer sandboxID=e35532fd21f54ac7df4cb44f0da8e222ff754574cdd0062aae9c8225e9401941
	Jan 10 09:44:34 pause-667994 crio[2101]: time="2026-01-10T09:44:34.025367235Z" level=info msg="Created container cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5: kube-system/kube-proxy-np729/kube-proxy" id=dd27e74d-3608-4bdd-8ed0-f79a3a817398 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 09:44:34 pause-667994 crio[2101]: time="2026-01-10T09:44:34.029072703Z" level=info msg="Starting container: cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5" id=0613a482-8f0a-4bab-8bd9-00f3a55b2ae1 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 09:44:34 pause-667994 crio[2101]: time="2026-01-10T09:44:34.032135938Z" level=info msg="Started container" PID=2241 containerID=cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5 description=kube-system/kube-proxy-np729/kube-proxy id=0613a482-8f0a-4bab-8bd9-00f3a55b2ae1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ef9575200440a488bc28644a3c652716aa5c866e9cafeace9834ec1355285ca
	Jan 10 09:44:43 pause-667994 crio[2101]: time="2026-01-10T09:44:43.849298541Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 09:44:43 pause-667994 crio[2101]: time="2026-01-10T09:44:43.849337179Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 09:44:43 pause-667994 crio[2101]: time="2026-01-10T09:44:43.855709371Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 09:44:43 pause-667994 crio[2101]: time="2026-01-10T09:44:43.855870506Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 09:44:43 pause-667994 crio[2101]: time="2026-01-10T09:44:43.866226652Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 09:44:43 pause-667994 crio[2101]: time="2026-01-10T09:44:43.866377292Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 09:44:43 pause-667994 crio[2101]: time="2026-01-10T09:44:43.866467517Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 09:44:43 pause-667994 crio[2101]: time="2026-01-10T09:44:43.873803466Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 09:44:43 pause-667994 crio[2101]: time="2026-01-10T09:44:43.873962705Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	9a2e759498c31       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     12 seconds ago      Running             kube-scheduler            1                   f435bcb1c7302       kube-scheduler-pause-667994            kube-system
	cbd52ff43c844       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     12 seconds ago      Running             kube-proxy                1                   0ef9575200440       kube-proxy-np729                       kube-system
	21e6fbd88cb50       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     12 seconds ago      Running             coredns                   1                   e35532fd21f54       coredns-7d764666f9-k85wx               kube-system
	68fb1a0dce1c9       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     12 seconds ago      Running             kindnet-cni               1                   2b44661d7c2b8       kindnet-zflh7                          kube-system
	52a15e29b6810       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     12 seconds ago      Running             etcd                      1                   03799d5b5fb3c       etcd-pause-667994                      kube-system
	53f6564b9ab16       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     12 seconds ago      Running             kube-controller-manager   1                   cf399b5021bc6       kube-controller-manager-pause-667994   kube-system
	ca7b22c044279       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     12 seconds ago      Running             kube-apiserver            1                   ac885fdc2bdf4       kube-apiserver-pause-667994            kube-system
	0784e58579d80       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     25 seconds ago      Exited              coredns                   0                   e35532fd21f54       coredns-7d764666f9-k85wx               kube-system
	a9ee5c4a9a899       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   36 seconds ago      Exited              kindnet-cni               0                   2b44661d7c2b8       kindnet-zflh7                          kube-system
	08ecb38cf4e5b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     38 seconds ago      Exited              kube-proxy                0                   0ef9575200440       kube-proxy-np729                       kube-system
	3137a4adba2b5       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     52 seconds ago      Exited              kube-apiserver            0                   ac885fdc2bdf4       kube-apiserver-pause-667994            kube-system
	b64a03dfccee2       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     52 seconds ago      Exited              kube-scheduler            0                   f435bcb1c7302       kube-scheduler-pause-667994            kube-system
	3e198530592fe       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     52 seconds ago      Exited              kube-controller-manager   0                   cf399b5021bc6       kube-controller-manager-pause-667994   kube-system
	6b2602db93009       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     52 seconds ago      Exited              etcd                      0                   03799d5b5fb3c       etcd-pause-667994                      kube-system
	
	
	==> coredns [0784e58579d80d5fdf9ddd218fcc3557f470cd5dafeef80fe3b62c323d467f92] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42133 - 19475 "HINFO IN 6944866512423796572.2950916723408314853. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.052145536s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [21e6fbd88cb50fb278f571a2da3cc9c7fb185abfe500a709cb28f4a8b5c433a4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45793 - 55333 "HINFO IN 8773195563947633132.7269882810796692473. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010811563s
	
	
	==> describe nodes <==
	Name:               pause-667994
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-667994
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=pause-667994
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T09_44_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 09:43:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-667994
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 09:44:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 09:44:19 +0000   Sat, 10 Jan 2026 09:43:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 09:44:19 +0000   Sat, 10 Jan 2026 09:43:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 09:44:19 +0000   Sat, 10 Jan 2026 09:43:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 09:44:19 +0000   Sat, 10 Jan 2026 09:44:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-667994
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                bb4a1973-b020-4b0d-a4b8-c5a69bdeb681
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-k85wx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     40s
	  kube-system                 etcd-pause-667994                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         46s
	  kube-system                 kindnet-zflh7                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      39s
	  kube-system                 kube-apiserver-pause-667994             250m (12%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-pause-667994    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-np729                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-pause-667994             100m (5%)     0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  41s   node-controller  Node pause-667994 event: Registered Node pause-667994 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node pause-667994 event: Registered Node pause-667994 in Controller
	
	
	==> dmesg <==
	[Jan10 09:22] overlayfs: idmapped layers are currently not supported
	[Jan10 09:23] overlayfs: idmapped layers are currently not supported
	[Jan10 09:24] overlayfs: idmapped layers are currently not supported
	[Jan10 09:25] overlayfs: idmapped layers are currently not supported
	[  +3.457822] overlayfs: idmapped layers are currently not supported
	[Jan10 09:27] overlayfs: idmapped layers are currently not supported
	[ +38.319069] overlayfs: idmapped layers are currently not supported
	[Jan10 09:28] overlayfs: idmapped layers are currently not supported
	[  +3.010233] overlayfs: idmapped layers are currently not supported
	[Jan10 09:29] overlayfs: idmapped layers are currently not supported
	[Jan10 09:30] overlayfs: idmapped layers are currently not supported
	[Jan10 09:31] overlayfs: idmapped layers are currently not supported
	[Jan10 09:35] overlayfs: idmapped layers are currently not supported
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [52a15e29b6810213581004202f307660af088a92dad0ff8891c6255bfd4a109c] <==
	{"level":"info","ts":"2026-01-10T09:44:33.625189Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T09:44:33.647335Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T09:44:33.647404Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T09:44:33.625254Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T09:44:33.709041Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T09:44:33.625547Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T09:44:33.681319Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T09:44:33.681390Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T09:44:33.709982Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T09:44:33.710083Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T09:44:33.710169Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T09:44:33.710221Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T09:44:33.712535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T09:44:33.712604Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T09:44:33.712651Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T09:44:33.712706Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T09:44:33.744802Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-667994 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T09:44:33.744924Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T09:44:33.745176Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T09:44:33.746061Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T09:44:33.748172Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T09:44:33.748300Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T09:44:33.748349Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T09:44:33.769312Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T09:44:33.806340Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> etcd [6b2602db93009f92d4e46ce3746289ad07e294fdd5632f0cc9df5ca69568a037] <==
	{"level":"info","ts":"2026-01-10T09:43:54.150275Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T09:43:54.151374Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:43:54.227104Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:43:54.228743Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:43:54.229312Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T09:43:54.229836Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T09:43:54.229953Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T09:44:24.912772Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2026-01-10T09:44:24.912824Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-667994","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2026-01-10T09:44:24.912930Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2026-01-10T09:44:25.193068Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2026-01-10T09:44:25.193156Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T09:44:25.193176Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2026-01-10T09:44:25.193339Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-10T09:44:25.193376Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-10T09:44:25.193403Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T09:44:25.193415Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2026-01-10T09:44:25.193436Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2026-01-10T09:44:25.193662Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-10T09:44:25.193687Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-10T09:44:25.193697Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T09:44:25.196700Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2026-01-10T09:44:25.196785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T09:44:25.196821Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T09:44:25.196839Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-667994","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 09:44:45 up  2:27,  0 user,  load average: 3.54, 2.42, 2.51
	Linux pause-667994 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [68fb1a0dce1c92de2d0836e3dd0481cccd9c2b968207d30be34d1f7ee2fde43f] <==
	I0110 09:44:33.462950       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 09:44:33.483033       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 09:44:33.491854       1 main.go:148] setting mtu 1500 for CNI 
	I0110 09:44:33.499115       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 09:44:33.499442       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T09:44:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 09:44:33.836150       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 09:44:33.836177       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 09:44:33.836186       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 09:44:33.839097       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 09:44:37.136914       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 09:44:37.137028       1 metrics.go:72] Registering metrics
	I0110 09:44:37.137111       1 controller.go:711] "Syncing nftables rules"
	I0110 09:44:43.840592       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 09:44:43.840642       1 main.go:301] handling current node
	
	
	==> kindnet [a9ee5c4a9a8997f500d31311aaf7abec04fd144f6a38bf93ae0a1a7e06b8a4ec] <==
	I0110 09:44:08.920725       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 09:44:09.016701       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 09:44:09.016919       1 main.go:148] setting mtu 1500 for CNI 
	I0110 09:44:09.016964       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 09:44:09.017009       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T09:44:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 09:44:09.220096       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 09:44:09.220317       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 09:44:09.220360       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 09:44:09.220551       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 09:44:09.520677       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 09:44:09.520703       1 metrics.go:72] Registering metrics
	I0110 09:44:09.520770       1 controller.go:711] "Syncing nftables rules"
	I0110 09:44:19.220297       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 09:44:19.220370       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3137a4adba2b54df4dbcba37d5c02ee6e8385e299cea66c5a18a8c78c7530e30] <==
	W0110 09:44:24.940283       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943297       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943376       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943429       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943478       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943525       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943579       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.943625       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.948832       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.948918       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949305       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949348       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949391       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949439       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949483       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949535       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949602       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949643       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949680       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949720       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949763       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949803       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949843       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949884       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 09:44:24.949933       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ca7b22c04427907811f0cdfff05f6eb66fb79acba12c23d12611c3a16d4a5ea1] <==
	I0110 09:44:36.803644       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0110 09:44:37.025485       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 09:44:37.042351       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:37.042455       1 policy_source.go:248] refreshing policies
	I0110 09:44:37.060984       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 09:44:37.070478       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 09:44:37.070865       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 09:44:37.071200       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 09:44:37.075634       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 09:44:37.075993       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:37.076081       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:37.076313       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:37.076724       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 09:44:37.076749       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0110 09:44:37.083953       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 09:44:37.089249       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 09:44:37.089355       1 aggregator.go:187] initial CRD sync complete...
	I0110 09:44:37.089372       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 09:44:37.089379       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 09:44:37.089384       1 cache.go:39] Caches are synced for autoregister controller
	I0110 09:44:37.091964       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 09:44:37.103398       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 09:44:37.103532       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 09:44:37.681364       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 09:44:38.911735       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	
	
	==> kube-controller-manager [3e198530592feda9a423d035b1ef29be2edbe94052f9194b7ca5370b54f3e119] <==
	I0110 09:44:04.574769       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.574830       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.574853       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.574863       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.574877       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.574919       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.575173       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.575345       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.577714       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.577779       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.577805       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.577858       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.592684       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.603064       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.603857       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.604033       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.641299       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.641350       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.641384       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.656042       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.694972       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:04.694998       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 09:44:04.695004       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 09:44:04.709117       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:19.576537       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-controller-manager [53f6564b9ab169e7bc731d93eaab979c3c9833109ed4642b5198d8e526714f21] <==
	I0110 09:44:40.313959       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.313983       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314020       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314054       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314256       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314303       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 09:44:40.314333       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.313365       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314400       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314459       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314511       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314335       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 09:44:40.313966       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314949       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.315079       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.315122       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.313974       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314324       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314315       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.328790       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.328825       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.328924       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314377       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.314392       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:40.351117       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [08ecb38cf4e5b37649293c433f10aa7f9823c2691ecdb51233eb8c3474936604] <==
	I0110 09:44:06.878696       1 server_linux.go:53] "Using iptables proxy"
	I0110 09:44:06.970020       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 09:44:07.070751       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:07.070784       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 09:44:07.070914       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 09:44:07.094678       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 09:44:07.094793       1 server_linux.go:136] "Using iptables Proxier"
	I0110 09:44:07.098463       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 09:44:07.098895       1 server.go:529] "Version info" version="v1.35.0"
	I0110 09:44:07.098948       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 09:44:07.101743       1 config.go:106] "Starting endpoint slice config controller"
	I0110 09:44:07.101813       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 09:44:07.102147       1 config.go:200] "Starting service config controller"
	I0110 09:44:07.102194       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 09:44:07.102515       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 09:44:07.102558       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 09:44:07.103053       1 config.go:309] "Starting node config controller"
	I0110 09:44:07.103099       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 09:44:07.103129       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 09:44:07.202222       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 09:44:07.202413       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 09:44:07.203697       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [cbd52ff43c844df634dbe635949db4bf41702c9de7aceca70c74fb1f26361fc5] <==
	I0110 09:44:35.395847       1 server_linux.go:53] "Using iptables proxy"
	I0110 09:44:35.674996       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 09:44:37.077167       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:37.077209       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 09:44:37.077301       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 09:44:37.105053       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 09:44:37.105115       1 server_linux.go:136] "Using iptables Proxier"
	I0110 09:44:37.109400       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 09:44:37.109721       1 server.go:529] "Version info" version="v1.35.0"
	I0110 09:44:37.109740       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 09:44:37.111172       1 config.go:200] "Starting service config controller"
	I0110 09:44:37.111274       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 09:44:37.111322       1 config.go:106] "Starting endpoint slice config controller"
	I0110 09:44:37.111350       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 09:44:37.111394       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 09:44:37.111429       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 09:44:37.112426       1 config.go:309] "Starting node config controller"
	I0110 09:44:37.112632       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 09:44:37.112670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 09:44:37.212196       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 09:44:37.212205       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 09:44:37.212221       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9a2e759498c3158d1d040dcec731958de65c1d6ec72ac4900a55d3e070c1045c] <==
	I0110 09:44:35.115220       1 serving.go:386] Generated self-signed cert in-memory
	W0110 09:44:36.890627       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 09:44:36.890665       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 09:44:36.890676       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 09:44:36.890683       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 09:44:36.993695       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 09:44:36.993799       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 09:44:37.015527       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 09:44:37.015663       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 09:44:37.015681       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 09:44:37.024814       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 09:44:37.116751       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [b64a03dfccee29f0abd41c54d6eab5aad5bf378293778a8157db8c2e1453fdcb] <==
	E0110 09:43:57.571499       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 09:43:57.571984       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 09:43:57.572438       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 09:43:57.572763       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 09:43:57.572829       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 09:43:57.572866       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 09:43:57.572903       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 09:43:57.572943       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 09:43:57.572958       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 09:43:57.572979       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 09:43:58.389623       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 09:43:58.447519       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 09:43:58.448977       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 09:43:58.459919       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 09:43:58.572747       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 09:43:58.600810       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 09:43:58.610206       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 09:43:58.716670       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I0110 09:44:00.479468       1 shared_informer.go:377] "Caches are synced"
	I0110 09:44:24.896275       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0110 09:44:24.896539       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 09:44:24.913948       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0110 09:44:24.914015       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0110 09:44:24.914021       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0110 09:44:24.914038       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.783405    1297 reflector.go:204] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-667994\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.783545    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="58772b5c945bdc20530019da53aef575" pod="kube-system/kube-scheduler-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.884717    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="9120de934cbdbaf36f6dc53cd92c2c75" pod="kube-system/etcd-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.923813    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="c1422dbe241a22040fb2786e3acd6f43" pod="kube-system/kube-controller-manager-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.957790    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="99b43641a2dc17b6a09a264654c43906" pod="kube-system/kube-apiserver-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.963153    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-np729\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="1f9a69f0-2849-4a4b-8636-d8f9e1b0de26" pod="kube-system/kube-proxy-np729"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.964482    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-zflh7\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="598cecc0-71a3-42e3-939d-6d1fa94bf8d4" pod="kube-system/kindnet-zflh7"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.966027    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-k85wx\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="23fc4d7b-fe46-4d48-8fad-90d3528fa2bf" pod="kube-system/coredns-7d764666f9-k85wx"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.967204    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="58772b5c945bdc20530019da53aef575" pod="kube-system/kube-scheduler-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.968240    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="9120de934cbdbaf36f6dc53cd92c2c75" pod="kube-system/etcd-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.971868    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="c1422dbe241a22040fb2786e3acd6f43" pod="kube-system/kube-controller-manager-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.973543    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-667994\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="99b43641a2dc17b6a09a264654c43906" pod="kube-system/kube-apiserver-pause-667994"
	Jan 10 09:44:36 pause-667994 kubelet[1297]: E0110 09:44:36.995740    1297 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-np729\" is forbidden: User \"system:node:pause-667994\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-667994' and this object" podUID="1f9a69f0-2849-4a4b-8636-d8f9e1b0de26" pod="kube-system/kube-proxy-np729"
	Jan 10 09:44:37 pause-667994 kubelet[1297]: E0110 09:44:37.012883    1297 status_manager.go:1045] "Failed to get status for pod" err=<
	Jan 10 09:44:37 pause-667994 kubelet[1297]:         pods "kindnet-zflh7" is forbidden: User "system:node:pause-667994" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-667994' and this object
	Jan 10 09:44:37 pause-667994 kubelet[1297]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	Jan 10 09:44:37 pause-667994 kubelet[1297]:  > podUID="598cecc0-71a3-42e3-939d-6d1fa94bf8d4" pod="kube-system/kindnet-zflh7"
	Jan 10 09:44:37 pause-667994 kubelet[1297]: E0110 09:44:37.014783    1297 status_manager.go:1045] "Failed to get status for pod" err=<
	Jan 10 09:44:37 pause-667994 kubelet[1297]:         pods "coredns-7d764666f9-k85wx" is forbidden: User "system:node:pause-667994" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-667994' and this object
	Jan 10 09:44:37 pause-667994 kubelet[1297]:         RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found]
	Jan 10 09:44:37 pause-667994 kubelet[1297]:  > podUID="23fc4d7b-fe46-4d48-8fad-90d3528fa2bf" pod="kube-system/coredns-7d764666f9-k85wx"
	Jan 10 09:44:37 pause-667994 kubelet[1297]: E0110 09:44:37.628241    1297 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-667994" containerName="kube-apiserver"
	Jan 10 09:44:40 pause-667994 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 09:44:40 pause-667994 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 09:44:40 pause-667994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-667994 -n pause-667994
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-667994 -n pause-667994: exit status 2 (357.982076ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-667994 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (257.203087ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:01:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-729486 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-729486 describe deploy/metrics-server -n kube-system: exit status 1 (122.303354ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-729486 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-729486
helpers_test.go:244: (dbg) docker inspect old-k8s-version-729486:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a",
	        "Created": "2026-01-10T10:00:53.623819553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 497887,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:00:53.685187723Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/hostname",
	        "HostsPath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/hosts",
	        "LogPath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a-json.log",
	        "Name": "/old-k8s-version-729486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-729486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-729486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a",
	                "LowerDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-729486",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-729486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-729486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-729486",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-729486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a21ba436540510cc2ec8c97965ee8790914a5769ae0aceea140aa272262afaaf",
	            "SandboxKey": "/var/run/docker/netns/a21ba4365405",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-729486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:f3:97:b4:bc:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2fc70c7426464ff9890052e4156c669ae44556450aab6cdc6b7787e2fd7c393f",
	                    "EndpointID": "c70c60364f165995bb714439d13a2b486a2fec864a218426ffba3ab299830428",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-729486",
	                        "e3db4a48fc4a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
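The Ports section of the inspect output above is what the SSH-based helpers rely on; a minimal standalone sketch of that lookup, reusing the inspect template that appears verbatim in the cli_runner log lines further down (illustrative only, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Extract the host port mapped to the container's 22/tcp from docker inspect,
		// using the same Go template shown in the cli_runner log lines below.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"old-k8s-version-729486").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		// For the container inspected above this prints 33419.
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}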
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-729486 -n old-k8s-version-729486
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-729486 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-729486 logs -n 25: (1.18114203s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-255897 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo containerd config dump                                                                                                                                                                                                  │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo crio config                                                                                                                                                                                                             │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ delete  │ -p cilium-255897                                                                                                                                                                                                                              │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:57 UTC │ 10 Jan 26 09:58 UTC │
	│ delete  │ -p cert-expiration-599529                                                                                                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │ 10 Jan 26 09:58 UTC │
	│ start   │ -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-524845 │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │                     │
	│ delete  │ -p force-systemd-env-646877                                                                                                                                                                                                                   │ force-systemd-env-646877  │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p cert-options-525619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ cert-options-525619 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ -p cert-options-525619 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ delete  │ -p cert-options-525619                                                                                                                                                                                                                        │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:00:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:00:47.617488  497460 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:00:47.617602  497460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:00:47.617611  497460 out.go:374] Setting ErrFile to fd 2...
	I0110 10:00:47.617617  497460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:00:47.617871  497460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:00:47.618294  497460 out.go:368] Setting JSON to false
	I0110 10:00:47.619096  497460 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9797,"bootTime":1768029451,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:00:47.619156  497460 start.go:143] virtualization:  
	I0110 10:00:47.622586  497460 out.go:179] * [old-k8s-version-729486] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:00:47.627019  497460 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:00:47.627070  497460 notify.go:221] Checking for updates...
	I0110 10:00:47.635362  497460 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:00:47.638484  497460 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:00:47.641737  497460 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:00:47.644817  497460 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:00:47.647888  497460 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:00:47.651492  497460 config.go:182] Loaded profile config "force-systemd-flag-524845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:00:47.651646  497460 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:00:47.682113  497460 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:00:47.682222  497460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:00:47.741699  497460 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:00:47.732638443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:00:47.741809  497460 docker.go:319] overlay module found
	I0110 10:00:47.745128  497460 out.go:179] * Using the docker driver based on user configuration
	I0110 10:00:47.748180  497460 start.go:309] selected driver: docker
	I0110 10:00:47.748199  497460 start.go:928] validating driver "docker" against <nil>
	I0110 10:00:47.748214  497460 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:00:47.748970  497460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:00:47.806534  497460 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:00:47.797480916 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:00:47.806694  497460 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 10:00:47.806908  497460 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:00:47.809829  497460 out.go:179] * Using Docker driver with root privileges
	I0110 10:00:47.812773  497460 cni.go:84] Creating CNI manager for ""
	I0110 10:00:47.812844  497460 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:00:47.812856  497460 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 10:00:47.812942  497460 start.go:353] cluster config:
	{Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:00:47.817832  497460 out.go:179] * Starting "old-k8s-version-729486" primary control-plane node in "old-k8s-version-729486" cluster
	I0110 10:00:47.820594  497460 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:00:47.823504  497460 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:00:47.826288  497460 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 10:00:47.826346  497460 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:00:47.826359  497460 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:00:47.826367  497460 cache.go:65] Caching tarball of preloaded images
	I0110 10:00:47.826444  497460 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:00:47.826454  497460 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0110 10:00:47.826558  497460 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/config.json ...
	I0110 10:00:47.826575  497460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/config.json: {Name:mk97cc967364dc444fa5b515e10c2852c34cd31c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:00:47.855647  497460 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:00:47.855670  497460 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:00:47.855685  497460 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:00:47.855721  497460 start.go:360] acquireMachinesLock for old-k8s-version-729486: {Name:mk0f30d4f7ea165498ccd896959105635842f094 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:00:47.855832  497460 start.go:364] duration metric: took 90.602µs to acquireMachinesLock for "old-k8s-version-729486"
	I0110 10:00:47.855862  497460 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:00:47.855931  497460 start.go:125] createHost starting for "" (driver="docker")
	I0110 10:00:47.859330  497460 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 10:00:47.859567  497460 start.go:159] libmachine.API.Create for "old-k8s-version-729486" (driver="docker")
	I0110 10:00:47.859594  497460 client.go:173] LocalClient.Create starting
	I0110 10:00:47.859655  497460 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 10:00:47.859700  497460 main.go:144] libmachine: Decoding PEM data...
	I0110 10:00:47.859720  497460 main.go:144] libmachine: Parsing certificate...
	I0110 10:00:47.859774  497460 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 10:00:47.859796  497460 main.go:144] libmachine: Decoding PEM data...
	I0110 10:00:47.859815  497460 main.go:144] libmachine: Parsing certificate...
	I0110 10:00:47.860194  497460 cli_runner.go:164] Run: docker network inspect old-k8s-version-729486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 10:00:47.875974  497460 cli_runner.go:211] docker network inspect old-k8s-version-729486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 10:00:47.876176  497460 network_create.go:284] running [docker network inspect old-k8s-version-729486] to gather additional debugging logs...
	I0110 10:00:47.876199  497460 cli_runner.go:164] Run: docker network inspect old-k8s-version-729486
	W0110 10:00:47.891596  497460 cli_runner.go:211] docker network inspect old-k8s-version-729486 returned with exit code 1
	I0110 10:00:47.891629  497460 network_create.go:287] error running [docker network inspect old-k8s-version-729486]: docker network inspect old-k8s-version-729486: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-729486 not found
	I0110 10:00:47.891644  497460 network_create.go:289] output of [docker network inspect old-k8s-version-729486]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-729486 not found
	
	** /stderr **
	I0110 10:00:47.891743  497460 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:00:47.919939  497460 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 10:00:47.920323  497460 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 10:00:47.921047  497460 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 10:00:47.921490  497460 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400193fd40}
	I0110 10:00:47.921517  497460 network_create.go:124] attempt to create docker network old-k8s-version-729486 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 10:00:47.921569  497460 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-729486 old-k8s-version-729486
	I0110 10:00:47.984982  497460 network_create.go:108] docker network old-k8s-version-729486 192.168.76.0/24 created
	I0110 10:00:47.985011  497460 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-729486" container
	I0110 10:00:47.985100  497460 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 10:00:48.005094  497460 cli_runner.go:164] Run: docker volume create old-k8s-version-729486 --label name.minikube.sigs.k8s.io=old-k8s-version-729486 --label created_by.minikube.sigs.k8s.io=true
	I0110 10:00:48.034538  497460 oci.go:103] Successfully created a docker volume old-k8s-version-729486
	I0110 10:00:48.034628  497460 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-729486-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-729486 --entrypoint /usr/bin/test -v old-k8s-version-729486:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 10:00:48.561054  497460 oci.go:107] Successfully prepared a docker volume old-k8s-version-729486
	I0110 10:00:48.561119  497460 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 10:00:48.561131  497460 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 10:00:48.561209  497460 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-729486:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 10:00:53.547661  497460 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-729486:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.986412457s)
	I0110 10:00:53.547702  497460 kic.go:203] duration metric: took 4.986567585s to extract preloaded images to volume ...
	W0110 10:00:53.547834  497460 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 10:00:53.547953  497460 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 10:00:53.608227  497460 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-729486 --name old-k8s-version-729486 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-729486 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-729486 --network old-k8s-version-729486 --ip 192.168.76.2 --volume old-k8s-version-729486:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 10:00:53.875261  497460 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Running}}
	I0110 10:00:53.903981  497460 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:00:53.928681  497460 cli_runner.go:164] Run: docker exec old-k8s-version-729486 stat /var/lib/dpkg/alternatives/iptables
	I0110 10:00:53.987895  497460 oci.go:144] the created container "old-k8s-version-729486" has a running status.
	I0110 10:00:53.987932  497460 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa...
	I0110 10:00:54.189101  497460 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 10:00:54.213924  497460 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:00:54.240861  497460 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 10:00:54.240887  497460 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-729486 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 10:00:54.322107  497460 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:00:54.340979  497460 machine.go:94] provisionDockerMachine start ...
	I0110 10:00:54.341064  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:00:54.362425  497460 main.go:144] libmachine: Using SSH client type: native
	I0110 10:00:54.362847  497460 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I0110 10:00:54.362862  497460 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:00:54.363652  497460 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 10:00:57.512281  497460 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-729486
	
	I0110 10:00:57.512306  497460 ubuntu.go:182] provisioning hostname "old-k8s-version-729486"
	I0110 10:00:57.512370  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:00:57.529326  497460 main.go:144] libmachine: Using SSH client type: native
	I0110 10:00:57.529641  497460 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I0110 10:00:57.529660  497460 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-729486 && echo "old-k8s-version-729486" | sudo tee /etc/hostname
	I0110 10:00:57.690409  497460 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-729486
	
	I0110 10:00:57.690481  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:00:57.712839  497460 main.go:144] libmachine: Using SSH client type: native
	I0110 10:00:57.713153  497460 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I0110 10:00:57.713174  497460 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-729486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-729486/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-729486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:00:57.861343  497460 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:00:57.861369  497460 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:00:57.861387  497460 ubuntu.go:190] setting up certificates
	I0110 10:00:57.861398  497460 provision.go:84] configureAuth start
	I0110 10:00:57.861468  497460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-729486
	I0110 10:00:57.879724  497460 provision.go:143] copyHostCerts
	I0110 10:00:57.879801  497460 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:00:57.879811  497460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:00:57.879896  497460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:00:57.879999  497460 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:00:57.880004  497460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:00:57.880042  497460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:00:57.880132  497460 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:00:57.880138  497460 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:00:57.880162  497460 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:00:57.880212  497460 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-729486 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-729486]
	I0110 10:00:58.122888  497460 provision.go:177] copyRemoteCerts
	I0110 10:00:58.122962  497460 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:00:58.123018  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:00:58.141037  497460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:00:58.244571  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:00:58.262446  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0110 10:00:58.280171  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:00:58.297120  497460 provision.go:87] duration metric: took 435.700382ms to configureAuth
	I0110 10:00:58.297189  497460 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:00:58.297386  497460 config.go:182] Loaded profile config "old-k8s-version-729486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 10:00:58.297494  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:00:58.314403  497460 main.go:144] libmachine: Using SSH client type: native
	I0110 10:00:58.314706  497460 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33419 <nil> <nil>}
	I0110 10:00:58.314726  497460 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:00:58.624581  497460 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:00:58.624607  497460 machine.go:97] duration metric: took 4.283604908s to provisionDockerMachine
	I0110 10:00:58.624619  497460 client.go:176] duration metric: took 10.765017515s to LocalClient.Create
	I0110 10:00:58.624633  497460 start.go:167] duration metric: took 10.765067625s to libmachine.API.Create "old-k8s-version-729486"
	I0110 10:00:58.624640  497460 start.go:293] postStartSetup for "old-k8s-version-729486" (driver="docker")
	I0110 10:00:58.624650  497460 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:00:58.624730  497460 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:00:58.624777  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:00:58.644649  497460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:00:58.748428  497460 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:00:58.751702  497460 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:00:58.751734  497460 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:00:58.751747  497460 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:00:58.751805  497460 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:00:58.751893  497460 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:00:58.752001  497460 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:00:58.759591  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:00:58.776936  497460 start.go:296] duration metric: took 152.281831ms for postStartSetup
	I0110 10:00:58.777327  497460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-729486
	I0110 10:00:58.793661  497460 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/config.json ...
	I0110 10:00:58.793941  497460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:00:58.793991  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:00:58.809847  497460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:00:58.909606  497460 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:00:58.914284  497460 start.go:128] duration metric: took 11.05833633s to createHost
	I0110 10:00:58.914314  497460 start.go:83] releasing machines lock for "old-k8s-version-729486", held for 11.058467835s
	I0110 10:00:58.914404  497460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-729486
	I0110 10:00:58.930283  497460 ssh_runner.go:195] Run: cat /version.json
	I0110 10:00:58.930341  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:00:58.930590  497460 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:00:58.930657  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:00:58.955531  497460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:00:58.957244  497460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:00:59.159029  497460 ssh_runner.go:195] Run: systemctl --version
	I0110 10:00:59.167132  497460 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:00:59.222222  497460 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:00:59.227154  497460 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:00:59.227256  497460 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:00:59.256410  497460 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 10:00:59.256443  497460 start.go:496] detecting cgroup driver to use...
	I0110 10:00:59.256529  497460 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:00:59.256614  497460 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:00:59.275033  497460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:00:59.287752  497460 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:00:59.287823  497460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:00:59.305746  497460 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:00:59.324650  497460 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:00:59.444877  497460 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:00:59.569998  497460 docker.go:234] disabling docker service ...
	I0110 10:00:59.570068  497460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:00:59.591394  497460 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:00:59.608421  497460 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:00:59.728619  497460 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:00:59.850393  497460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:00:59.864064  497460 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:00:59.878342  497460 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0110 10:00:59.878421  497460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:00:59.886851  497460 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:00:59.886967  497460 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:00:59.896069  497460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:00:59.909578  497460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:00:59.919142  497460 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:00:59.927815  497460 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:00:59.937259  497460 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:00:59.951826  497460 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:00:59.961572  497460 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:00:59.970354  497460 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:00:59.977900  497460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:01:00.334602  497460 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:01:00.580852  497460 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:01:00.580928  497460 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:01:00.585159  497460 start.go:574] Will wait 60s for crictl version
	I0110 10:01:00.585224  497460 ssh_runner.go:195] Run: which crictl
	I0110 10:01:00.589376  497460 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:01:00.619577  497460 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:01:00.619718  497460 ssh_runner.go:195] Run: crio --version
	I0110 10:01:00.653102  497460 ssh_runner.go:195] Run: crio --version
	I0110 10:01:00.691466  497460 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I0110 10:01:00.695125  497460 cli_runner.go:164] Run: docker network inspect old-k8s-version-729486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:01:00.718543  497460 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:01:00.722817  497460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:01:00.733528  497460 kubeadm.go:884] updating cluster {Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:01:00.733680  497460 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 10:01:00.733739  497460 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:01:00.770119  497460 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:01:00.770145  497460 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:01:00.770201  497460 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:01:00.796600  497460 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:01:00.796624  497460 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:01:00.796633  497460 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I0110 10:01:00.796753  497460 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-729486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
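The kubelet unit drop-in logged above is what later gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). As a minimal, hypothetical sketch in Go (not minikube's actual implementation), rendering such a drop-in from the node-specific values seen in this log could look like:

// Illustrative sketch only, not minikube's code: render a kubelet systemd
// drop-in like the one logged above from a few node-specific values.
package main

import (
	"log"
	"os"
	"text/template"
)

// nodeInfo holds the hypothetical values substituted into the drop-in.
type nodeInfo struct {
	KubeletPath string
	Hostname    string
	NodeIP      string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log lines above.
	if err := t.Execute(os.Stdout, nodeInfo{
		KubeletPath: "/var/lib/minikube/binaries/v1.28.0/kubelet",
		Hostname:    "old-k8s-version-729486",
		NodeIP:      "192.168.76.2",
	}); err != nil {
		log.Fatal(err)
	}
}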
	I0110 10:01:00.796833  497460 ssh_runner.go:195] Run: crio config
	I0110 10:01:00.852226  497460 cni.go:84] Creating CNI manager for ""
	I0110 10:01:00.852253  497460 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:01:00.852315  497460 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:01:00.852353  497460 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-729486 NodeName:old-k8s-version-729486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:01:00.852505  497460 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-729486"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
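The kubeadm configuration dumped above is multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration); the log below shows it being copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs. A minimal sketch, assuming gopkg.in/yaml.v3 is available (illustration only, not part of minikube or this test run), for decoding each document and printing its apiVersion and kind as a quick sanity check:

// Sketch only: walk the multi-document kubeadm YAML dumped above and print
// each document's apiVersion and kind. Assumes gopkg.in/yaml.v3.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log: the rendered config is copied here before kubeadm init.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more YAML documents
			}
			log.Fatal(err)
		}
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}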
	
	I0110 10:01:00.852599  497460 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0110 10:01:00.860414  497460 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:01:00.860485  497460 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:01:00.868116  497460 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0110 10:01:00.880829  497460 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:01:00.893760  497460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0110 10:01:00.906947  497460 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:01:00.910478  497460 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:01:00.919971  497460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:01:01.035020  497460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:01:01.051921  497460 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486 for IP: 192.168.76.2
	I0110 10:01:01.051943  497460 certs.go:195] generating shared ca certs ...
	I0110 10:01:01.051971  497460 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:01:01.052129  497460 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:01:01.052186  497460 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:01:01.052200  497460 certs.go:257] generating profile certs ...
	I0110 10:01:01.052255  497460 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.key
	I0110 10:01:01.052279  497460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt with IP's: []
	I0110 10:01:01.089488  497460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt ...
	I0110 10:01:01.089522  497460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: {Name:mk63e8562f681c01791f0f8e1ad3e2dec5f4f0fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:01:01.089720  497460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.key ...
	I0110 10:01:01.089739  497460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.key: {Name:mkbf1d1bb0fdf2c6d5bb2ddd4dc33c8219c53633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:01:01.089830  497460 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.key.3e623c7c
	I0110 10:01:01.089851  497460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.crt.3e623c7c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 10:01:01.170330  497460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.crt.3e623c7c ...
	I0110 10:01:01.170368  497460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.crt.3e623c7c: {Name:mk41e5db30444a869500db1ba0552cf297983d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:01:01.170578  497460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.key.3e623c7c ...
	I0110 10:01:01.170593  497460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.key.3e623c7c: {Name:mk1f45033ec0b4daaa983e0813c949079793fa92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:01:01.170697  497460 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.crt.3e623c7c -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.crt
	I0110 10:01:01.170798  497460 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.key.3e623c7c -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.key
	I0110 10:01:01.170865  497460 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.key
	I0110 10:01:01.170891  497460 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.crt with IP's: []
	I0110 10:01:01.534836  497460 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.crt ...
	I0110 10:01:01.534878  497460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.crt: {Name:mk936b3fc943723a1f00d0532f1b6e8a7a3be831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:01:01.535085  497460 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.key ...
	I0110 10:01:01.535099  497460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.key: {Name:mkb7384d697f48030c06db6d07b2114c9aa6466e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:01:01.535307  497460 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:01:01.535358  497460 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:01:01.535375  497460 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:01:01.535404  497460 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:01:01.535429  497460 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:01:01.535458  497460 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:01:01.535511  497460 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:01:01.536118  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:01:01.555702  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:01:01.575763  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:01:01.594790  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:01:01.615234  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0110 10:01:01.634358  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:01:01.653217  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:01:01.671556  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:01:01.689740  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:01:01.708166  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:01:01.726779  497460 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:01:01.746074  497460 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:01:01.759710  497460 ssh_runner.go:195] Run: openssl version
	I0110 10:01:01.766438  497460 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:01:01.774925  497460 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:01:01.782953  497460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:01:01.787087  497460 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:01:01.787168  497460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:01:01.828431  497460 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:01:01.836261  497460 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
	I0110 10:01:01.844307  497460 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:01:01.852095  497460 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:01:01.860137  497460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:01:01.863936  497460 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:01:01.864007  497460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:01:01.905568  497460 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:01:01.914203  497460 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 10:01:01.921999  497460 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:01:01.930071  497460 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:01:01.938310  497460 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:01:01.942726  497460 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:01:01.942792  497460 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:01:01.989316  497460 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:01:02.001683  497460 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
	I0110 10:01:02.013043  497460 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:01:02.017582  497460 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 10:01:02.017677  497460 kubeadm.go:401] StartCluster: {Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:01:02.017772  497460 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:01:02.017833  497460 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:01:02.046171  497460 cri.go:96] found id: ""
	I0110 10:01:02.046293  497460 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:01:02.054828  497460 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 10:01:02.063198  497460 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:01:02.063315  497460 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:01:02.071454  497460 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:01:02.071526  497460 kubeadm.go:158] found existing configuration files:
	
	I0110 10:01:02.071603  497460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 10:01:02.079962  497460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:01:02.080087  497460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:01:02.087975  497460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 10:01:02.096158  497460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:01:02.096314  497460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:01:02.104489  497460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 10:01:02.113632  497460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:01:02.113752  497460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:01:02.121402  497460 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 10:01:02.129787  497460 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:01:02.129866  497460 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 10:01:02.137009  497460 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:01:02.204671  497460 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I0110 10:01:02.204952  497460 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:01:02.244889  497460 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:01:02.245032  497460 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:01:02.245088  497460 kubeadm.go:319] OS: Linux
	I0110 10:01:02.245167  497460 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:01:02.245245  497460 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:01:02.245319  497460 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:01:02.245397  497460 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:01:02.245471  497460 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:01:02.245574  497460 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:01:02.245655  497460 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:01:02.245743  497460 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:01:02.245827  497460 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:01:02.331059  497460 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:01:02.331250  497460 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:01:02.331401  497460 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0110 10:01:02.493118  497460 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:01:02.499433  497460 out.go:252]   - Generating certificates and keys ...
	I0110 10:01:02.499538  497460 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:01:02.499611  497460 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:01:02.831473  497460 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:01:03.252071  497460 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:01:03.805789  497460 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:01:04.276083  497460 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:01:04.491646  497460 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:01:04.491982  497460 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-729486] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 10:01:04.940752  497460 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:01:04.941329  497460 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-729486] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 10:01:05.579878  497460 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:01:06.085965  497460 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:01:07.348448  497460 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:01:07.348772  497460 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:01:07.894142  497460 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:01:08.884916  497460 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:01:09.659468  497460 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:01:10.198543  497460 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:01:10.199619  497460 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:01:10.202836  497460 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:01:10.206260  497460 out.go:252]   - Booting up control plane ...
	I0110 10:01:10.206379  497460 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:01:10.206464  497460 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:01:10.209698  497460 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:01:10.231409  497460 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:01:10.232327  497460 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:01:10.232699  497460 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:01:10.368632  497460 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0110 10:01:17.371821  497460 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.003396 seconds
	I0110 10:01:17.371948  497460 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 10:01:17.389193  497460 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 10:01:17.923657  497460 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 10:01:17.923866  497460 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-729486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 10:01:18.436400  497460 kubeadm.go:319] [bootstrap-token] Using token: 72uv3q.ae02tfknh69dd897
	I0110 10:01:18.439307  497460 out.go:252]   - Configuring RBAC rules ...
	I0110 10:01:18.439430  497460 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 10:01:18.447597  497460 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 10:01:18.456125  497460 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 10:01:18.460640  497460 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 10:01:18.467685  497460 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 10:01:18.471758  497460 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 10:01:18.489019  497460 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 10:01:18.799531  497460 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 10:01:18.873210  497460 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 10:01:18.875014  497460 kubeadm.go:319] 
	I0110 10:01:18.875087  497460 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 10:01:18.875093  497460 kubeadm.go:319] 
	I0110 10:01:18.875170  497460 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 10:01:18.875176  497460 kubeadm.go:319] 
	I0110 10:01:18.875201  497460 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 10:01:18.875704  497460 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 10:01:18.875762  497460 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 10:01:18.875766  497460 kubeadm.go:319] 
	I0110 10:01:18.875827  497460 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 10:01:18.875832  497460 kubeadm.go:319] 
	I0110 10:01:18.875880  497460 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 10:01:18.875883  497460 kubeadm.go:319] 
	I0110 10:01:18.875949  497460 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 10:01:18.876037  497460 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 10:01:18.876117  497460 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 10:01:18.876122  497460 kubeadm.go:319] 
	I0110 10:01:18.876478  497460 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 10:01:18.876594  497460 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 10:01:18.876603  497460 kubeadm.go:319] 
	I0110 10:01:18.876964  497460 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 72uv3q.ae02tfknh69dd897 \
	I0110 10:01:18.877080  497460 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e \
	I0110 10:01:18.877322  497460 kubeadm.go:319] 	--control-plane 
	I0110 10:01:18.877338  497460 kubeadm.go:319] 
	I0110 10:01:18.877648  497460 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 10:01:18.877658  497460 kubeadm.go:319] 
	I0110 10:01:18.877986  497460 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 72uv3q.ae02tfknh69dd897 \
	I0110 10:01:18.878356  497460 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e 
	I0110 10:01:18.882564  497460 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 10:01:18.882687  497460 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 10:01:18.882704  497460 cni.go:84] Creating CNI manager for ""
	I0110 10:01:18.882711  497460 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:01:18.885892  497460 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 10:01:18.888711  497460 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 10:01:18.894059  497460 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I0110 10:01:18.894081  497460 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 10:01:18.925756  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 10:01:19.916466  497460 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 10:01:19.916683  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:19.916791  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-729486 minikube.k8s.io/updated_at=2026_01_10T10_01_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=old-k8s-version-729486 minikube.k8s.io/primary=true
	I0110 10:01:20.063311  497460 ops.go:34] apiserver oom_adj: -16
	I0110 10:01:20.063506  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:20.564651  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:21.063786  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:21.563635  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:22.063633  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:22.563730  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:23.063885  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:23.564216  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:24.064518  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:24.564527  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:25.064155  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:25.563949  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:26.063775  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:26.564618  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:27.064119  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:27.564225  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:28.064617  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:28.564245  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:29.064559  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:29.564276  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:30.063973  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:30.564088  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:31.064286  497460 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:01:31.167014  497460 kubeadm.go:1114] duration metric: took 11.250401297s to wait for elevateKubeSystemPrivileges
	I0110 10:01:31.167047  497460 kubeadm.go:403] duration metric: took 29.149375024s to StartCluster
	I0110 10:01:31.167066  497460 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:01:31.167131  497460 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:01:31.167777  497460 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:01:31.168004  497460 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:01:31.168140  497460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 10:01:31.168407  497460 config.go:182] Loaded profile config "old-k8s-version-729486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 10:01:31.168457  497460 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:01:31.168556  497460 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-729486"
	I0110 10:01:31.168579  497460 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-729486"
	I0110 10:01:31.168607  497460 host.go:66] Checking if "old-k8s-version-729486" exists ...
	I0110 10:01:31.169130  497460 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:01:31.169644  497460 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-729486"
	I0110 10:01:31.169665  497460 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-729486"
	I0110 10:01:31.169940  497460 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:01:31.174371  497460 out.go:179] * Verifying Kubernetes components...
	I0110 10:01:31.179090  497460 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:01:31.212890  497460 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:01:31.215798  497460 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:01:31.215819  497460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:01:31.215872  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:01:31.216483  497460 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-729486"
	I0110 10:01:31.216531  497460 host.go:66] Checking if "old-k8s-version-729486" exists ...
	I0110 10:01:31.216973  497460 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:01:31.249468  497460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:01:31.258077  497460 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:01:31.258104  497460 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:01:31.258173  497460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:01:31.289446  497460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33419 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:01:31.550048  497460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:01:31.597548  497460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:01:31.661936  497460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 10:01:31.662099  497460 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:01:32.627903  497460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.07781902s)
	I0110 10:01:32.627969  497460 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.030399101s)
	I0110 10:01:32.629046  497460 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-729486" to be "Ready" ...
	I0110 10:01:32.629293  497460 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0110 10:01:32.693750  497460 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0110 10:01:32.696863  497460 addons.go:530] duration metric: took 1.528399328s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 10:01:33.133607  497460 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-729486" context rescaled to 1 replicas
	W0110 10:01:34.632955  497460 node_ready.go:57] node "old-k8s-version-729486" has "Ready":"False" status (will retry)
	W0110 10:01:36.633796  497460 node_ready.go:57] node "old-k8s-version-729486" has "Ready":"False" status (will retry)
	W0110 10:01:39.132734  497460 node_ready.go:57] node "old-k8s-version-729486" has "Ready":"False" status (will retry)
	W0110 10:01:41.133191  497460 node_ready.go:57] node "old-k8s-version-729486" has "Ready":"False" status (will retry)
	W0110 10:01:43.632915  497460 node_ready.go:57] node "old-k8s-version-729486" has "Ready":"False" status (will retry)
	I0110 10:01:45.633041  497460 node_ready.go:49] node "old-k8s-version-729486" is "Ready"
	I0110 10:01:45.633071  497460 node_ready.go:38] duration metric: took 13.003997352s for node "old-k8s-version-729486" to be "Ready" ...
	I0110 10:01:45.633088  497460 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:01:45.633149  497460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:01:45.646149  497460 api_server.go:72] duration metric: took 14.478108636s to wait for apiserver process to appear ...
	I0110 10:01:45.646179  497460 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:01:45.646200  497460 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:01:45.656080  497460 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:01:45.657936  497460 api_server.go:141] control plane version: v1.28.0
	I0110 10:01:45.657968  497460 api_server.go:131] duration metric: took 11.780509ms to wait for apiserver health ...
	I0110 10:01:45.657978  497460 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:01:45.663434  497460 system_pods.go:59] 8 kube-system pods found
	I0110 10:01:45.663508  497460 system_pods.go:61] "coredns-5dd5756b68-xsgtg" [c3718681-9e27-4160-b9fa-8462b5c71a26] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:01:45.663521  497460 system_pods.go:61] "etcd-old-k8s-version-729486" [76c695b2-b8aa-4ff0-ba29-32d4d846f6d0] Running
	I0110 10:01:45.663528  497460 system_pods.go:61] "kindnet-mcvws" [9a148c52-2e43-474d-accb-ff93db5e4756] Running
	I0110 10:01:45.663532  497460 system_pods.go:61] "kube-apiserver-old-k8s-version-729486" [ca0696bd-6f69-4f84-88e3-c1e430041c0c] Running
	I0110 10:01:45.663542  497460 system_pods.go:61] "kube-controller-manager-old-k8s-version-729486" [87cb675c-5667-4343-95c4-37ea7b51b941] Running
	I0110 10:01:45.663547  497460 system_pods.go:61] "kube-proxy-szwsd" [550b3042-ef9d-4e44-978b-f18534dc02bb] Running
	I0110 10:01:45.663551  497460 system_pods.go:61] "kube-scheduler-old-k8s-version-729486" [35c66509-77a2-4846-b919-14c61b09566f] Running
	I0110 10:01:45.663564  497460 system_pods.go:61] "storage-provisioner" [016f019c-d231-41db-b408-7bc9e1fb613e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:01:45.663569  497460 system_pods.go:74] duration metric: took 5.586575ms to wait for pod list to return data ...
	I0110 10:01:45.663589  497460 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:01:45.666691  497460 default_sa.go:45] found service account: "default"
	I0110 10:01:45.666719  497460 default_sa.go:55] duration metric: took 3.122994ms for default service account to be created ...
	I0110 10:01:45.666729  497460 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:01:45.671488  497460 system_pods.go:86] 8 kube-system pods found
	I0110 10:01:45.671523  497460 system_pods.go:89] "coredns-5dd5756b68-xsgtg" [c3718681-9e27-4160-b9fa-8462b5c71a26] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:01:45.671530  497460 system_pods.go:89] "etcd-old-k8s-version-729486" [76c695b2-b8aa-4ff0-ba29-32d4d846f6d0] Running
	I0110 10:01:45.671537  497460 system_pods.go:89] "kindnet-mcvws" [9a148c52-2e43-474d-accb-ff93db5e4756] Running
	I0110 10:01:45.671542  497460 system_pods.go:89] "kube-apiserver-old-k8s-version-729486" [ca0696bd-6f69-4f84-88e3-c1e430041c0c] Running
	I0110 10:01:45.671548  497460 system_pods.go:89] "kube-controller-manager-old-k8s-version-729486" [87cb675c-5667-4343-95c4-37ea7b51b941] Running
	I0110 10:01:45.671560  497460 system_pods.go:89] "kube-proxy-szwsd" [550b3042-ef9d-4e44-978b-f18534dc02bb] Running
	I0110 10:01:45.671568  497460 system_pods.go:89] "kube-scheduler-old-k8s-version-729486" [35c66509-77a2-4846-b919-14c61b09566f] Running
	I0110 10:01:45.671575  497460 system_pods.go:89] "storage-provisioner" [016f019c-d231-41db-b408-7bc9e1fb613e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:01:45.671611  497460 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 10:01:45.904165  497460 system_pods.go:86] 8 kube-system pods found
	I0110 10:01:45.904200  497460 system_pods.go:89] "coredns-5dd5756b68-xsgtg" [c3718681-9e27-4160-b9fa-8462b5c71a26] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:01:45.904208  497460 system_pods.go:89] "etcd-old-k8s-version-729486" [76c695b2-b8aa-4ff0-ba29-32d4d846f6d0] Running
	I0110 10:01:45.904215  497460 system_pods.go:89] "kindnet-mcvws" [9a148c52-2e43-474d-accb-ff93db5e4756] Running
	I0110 10:01:45.904221  497460 system_pods.go:89] "kube-apiserver-old-k8s-version-729486" [ca0696bd-6f69-4f84-88e3-c1e430041c0c] Running
	I0110 10:01:45.904226  497460 system_pods.go:89] "kube-controller-manager-old-k8s-version-729486" [87cb675c-5667-4343-95c4-37ea7b51b941] Running
	I0110 10:01:45.904231  497460 system_pods.go:89] "kube-proxy-szwsd" [550b3042-ef9d-4e44-978b-f18534dc02bb] Running
	I0110 10:01:45.904235  497460 system_pods.go:89] "kube-scheduler-old-k8s-version-729486" [35c66509-77a2-4846-b919-14c61b09566f] Running
	I0110 10:01:45.904242  497460 system_pods.go:89] "storage-provisioner" [016f019c-d231-41db-b408-7bc9e1fb613e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:01:46.154063  497460 system_pods.go:86] 8 kube-system pods found
	I0110 10:01:46.154189  497460 system_pods.go:89] "coredns-5dd5756b68-xsgtg" [c3718681-9e27-4160-b9fa-8462b5c71a26] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:01:46.154212  497460 system_pods.go:89] "etcd-old-k8s-version-729486" [76c695b2-b8aa-4ff0-ba29-32d4d846f6d0] Running
	I0110 10:01:46.154253  497460 system_pods.go:89] "kindnet-mcvws" [9a148c52-2e43-474d-accb-ff93db5e4756] Running
	I0110 10:01:46.154278  497460 system_pods.go:89] "kube-apiserver-old-k8s-version-729486" [ca0696bd-6f69-4f84-88e3-c1e430041c0c] Running
	I0110 10:01:46.154301  497460 system_pods.go:89] "kube-controller-manager-old-k8s-version-729486" [87cb675c-5667-4343-95c4-37ea7b51b941] Running
	I0110 10:01:46.154341  497460 system_pods.go:89] "kube-proxy-szwsd" [550b3042-ef9d-4e44-978b-f18534dc02bb] Running
	I0110 10:01:46.154370  497460 system_pods.go:89] "kube-scheduler-old-k8s-version-729486" [35c66509-77a2-4846-b919-14c61b09566f] Running
	I0110 10:01:46.154394  497460 system_pods.go:89] "storage-provisioner" [016f019c-d231-41db-b408-7bc9e1fb613e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:01:46.501432  497460 system_pods.go:86] 8 kube-system pods found
	I0110 10:01:46.501467  497460 system_pods.go:89] "coredns-5dd5756b68-xsgtg" [c3718681-9e27-4160-b9fa-8462b5c71a26] Running
	I0110 10:01:46.501475  497460 system_pods.go:89] "etcd-old-k8s-version-729486" [76c695b2-b8aa-4ff0-ba29-32d4d846f6d0] Running
	I0110 10:01:46.501480  497460 system_pods.go:89] "kindnet-mcvws" [9a148c52-2e43-474d-accb-ff93db5e4756] Running
	I0110 10:01:46.501485  497460 system_pods.go:89] "kube-apiserver-old-k8s-version-729486" [ca0696bd-6f69-4f84-88e3-c1e430041c0c] Running
	I0110 10:01:46.501491  497460 system_pods.go:89] "kube-controller-manager-old-k8s-version-729486" [87cb675c-5667-4343-95c4-37ea7b51b941] Running
	I0110 10:01:46.501496  497460 system_pods.go:89] "kube-proxy-szwsd" [550b3042-ef9d-4e44-978b-f18534dc02bb] Running
	I0110 10:01:46.501500  497460 system_pods.go:89] "kube-scheduler-old-k8s-version-729486" [35c66509-77a2-4846-b919-14c61b09566f] Running
	I0110 10:01:46.501505  497460 system_pods.go:89] "storage-provisioner" [016f019c-d231-41db-b408-7bc9e1fb613e] Running
	I0110 10:01:46.501517  497460 system_pods.go:126] duration metric: took 834.78128ms to wait for k8s-apps to be running ...
	I0110 10:01:46.501525  497460 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:01:46.501590  497460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:01:46.514766  497460 system_svc.go:56] duration metric: took 13.231351ms WaitForService to wait for kubelet
	I0110 10:01:46.514794  497460 kubeadm.go:587] duration metric: took 15.346758447s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:01:46.514814  497460 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:01:46.517553  497460 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:01:46.517586  497460 node_conditions.go:123] node cpu capacity is 2
	I0110 10:01:46.517602  497460 node_conditions.go:105] duration metric: took 2.751543ms to run NodePressure ...
	I0110 10:01:46.517640  497460 start.go:242] waiting for startup goroutines ...
	I0110 10:01:46.517648  497460 start.go:247] waiting for cluster config update ...
	I0110 10:01:46.517670  497460 start.go:256] writing updated cluster config ...
	I0110 10:01:46.517959  497460 ssh_runner.go:195] Run: rm -f paused
	I0110 10:01:46.521738  497460 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:01:46.525781  497460 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xsgtg" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:46.530552  497460 pod_ready.go:94] pod "coredns-5dd5756b68-xsgtg" is "Ready"
	I0110 10:01:46.530577  497460 pod_ready.go:86] duration metric: took 4.768291ms for pod "coredns-5dd5756b68-xsgtg" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:46.533725  497460 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:46.538422  497460 pod_ready.go:94] pod "etcd-old-k8s-version-729486" is "Ready"
	I0110 10:01:46.538448  497460 pod_ready.go:86] duration metric: took 4.697891ms for pod "etcd-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:46.541475  497460 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:46.546396  497460 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-729486" is "Ready"
	I0110 10:01:46.546420  497460 pod_ready.go:86] duration metric: took 4.920949ms for pod "kube-apiserver-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:46.549265  497460 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:46.926146  497460 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-729486" is "Ready"
	I0110 10:01:46.926168  497460 pod_ready.go:86] duration metric: took 376.875689ms for pod "kube-controller-manager-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:47.126060  497460 pod_ready.go:83] waiting for pod "kube-proxy-szwsd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:47.525521  497460 pod_ready.go:94] pod "kube-proxy-szwsd" is "Ready"
	I0110 10:01:47.525591  497460 pod_ready.go:86] duration metric: took 399.503427ms for pod "kube-proxy-szwsd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:47.726697  497460 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:48.126292  497460 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-729486" is "Ready"
	I0110 10:01:48.126320  497460 pod_ready.go:86] duration metric: took 399.588941ms for pod "kube-scheduler-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:01:48.126334  497460 pod_ready.go:40] duration metric: took 1.604564518s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:01:48.181155  497460 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I0110 10:01:48.184289  497460 out.go:203] 
	W0110 10:01:48.187054  497460 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 10:01:48.189918  497460 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:01:48.193586  497460 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-729486" cluster and "default" namespace by default
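Note on the version-skew warning above: kubectl is only supported within one minor version of the control plane, so a 1.33 client against a v1.28.0 API server (a skew of 5 minors) may hit incompatibilities. As the output itself suggests, the profile's bundled kubectl can be used instead. A minimal sketch, reusing the binary and profile name from this run (standard minikube and kubectl invocations, not commands captured in the log):

    # run the kubectl that matches the cluster through the minikube wrapper
    out/minikube-linux-arm64 -p old-k8s-version-729486 kubectl -- get pods -A

    # confirm which client version the system kubectl reports
    kubectl version --client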
	
	
	==> CRI-O <==
	Jan 10 10:01:45 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:45.990565694Z" level=info msg="Created container 4411cf95d61e06a6bbf16a27989d962a157aa3016cbf1e7a93fab18160d9f91c: kube-system/coredns-5dd5756b68-xsgtg/coredns" id=cf4516ed-6109-4e94-8a4b-41c71d3b950a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:01:45 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:45.991541592Z" level=info msg="Starting container: 4411cf95d61e06a6bbf16a27989d962a157aa3016cbf1e7a93fab18160d9f91c" id=4b270467-36d4-472c-96d0-9bca1e9e543b name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:01:45 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:45.993365706Z" level=info msg="Started container" PID=1961 containerID=4411cf95d61e06a6bbf16a27989d962a157aa3016cbf1e7a93fab18160d9f91c description=kube-system/coredns-5dd5756b68-xsgtg/coredns id=4b270467-36d4-472c-96d0-9bca1e9e543b name=/runtime.v1.RuntimeService/StartContainer sandboxID=b60e29f4d57cb4fc94ae20bd064286b24052360bd01cd95cc5e94fda049ecfcc
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.706107892Z" level=info msg="Running pod sandbox: default/busybox/POD" id=71769a73-1608-454f-bb6a-266b0ff01e55 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.706181936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.711458549Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:70da9679b093e16790114593f5975b4abc5abdc387b6dd65581389e2e1530ac9 UID:9cfed1f7-4d02-4c7d-acf4-33d7165fff27 NetNS:/var/run/netns/172a90b3-0e2e-45c0-ae4f-8eb39aa9324b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000065ca8}] Aliases:map[]}"
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.711493052Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.733963931Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:70da9679b093e16790114593f5975b4abc5abdc387b6dd65581389e2e1530ac9 UID:9cfed1f7-4d02-4c7d-acf4-33d7165fff27 NetNS:/var/run/netns/172a90b3-0e2e-45c0-ae4f-8eb39aa9324b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000065ca8}] Aliases:map[]}"
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.734115555Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.736728143Z" level=info msg="Ran pod sandbox 70da9679b093e16790114593f5975b4abc5abdc387b6dd65581389e2e1530ac9 with infra container: default/busybox/POD" id=71769a73-1608-454f-bb6a-266b0ff01e55 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.738669912Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d7228b46-83e1-4447-8b1b-9209222ee6db name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.739700522Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d7228b46-83e1-4447-8b1b-9209222ee6db name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.739945834Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d7228b46-83e1-4447-8b1b-9209222ee6db name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.742036962Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8a7d4f8a-2a55-450b-a07f-67d6e9a47957 name=/runtime.v1.ImageService/PullImage
	Jan 10 10:01:48 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:48.742615152Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 10:01:50 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:50.85811676Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=8a7d4f8a-2a55-450b-a07f-67d6e9a47957 name=/runtime.v1.ImageService/PullImage
	Jan 10 10:01:50 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:50.859335426Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8fb3145b-a202-463f-9931-226a020ef887 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:01:50 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:50.862418231Z" level=info msg="Creating container: default/busybox/busybox" id=1f5f639d-8dad-4547-a4fa-6936cf44d53e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:01:50 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:50.862556908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:01:50 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:50.867825973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:01:50 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:50.868669413Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:01:50 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:50.883411913Z" level=info msg="Created container 6d5902d188e7da0324af3100f1190507d317d62524726ebd2becd7d9d1fb1699: default/busybox/busybox" id=1f5f639d-8dad-4547-a4fa-6936cf44d53e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:01:50 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:50.886751747Z" level=info msg="Starting container: 6d5902d188e7da0324af3100f1190507d317d62524726ebd2becd7d9d1fb1699" id=39d76f02-0499-439b-94e4-42a920569912 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:01:50 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:50.889237728Z" level=info msg="Started container" PID=2019 containerID=6d5902d188e7da0324af3100f1190507d317d62524726ebd2becd7d9d1fb1699 description=default/busybox/busybox id=39d76f02-0499-439b-94e4-42a920569912 name=/runtime.v1.RuntimeService/StartContainer sandboxID=70da9679b093e16790114593f5975b4abc5abdc387b6dd65581389e2e1530ac9
	Jan 10 10:01:56 old-k8s-version-729486 crio[833]: time="2026-01-10T10:01:56.61027562Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	6d5902d188e7d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   70da9679b093e       busybox                                          default
	4411cf95d61e0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   b60e29f4d57cb       coredns-5dd5756b68-xsgtg                         kube-system
	f6db3a215cc1f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   2e223fa7b8bb4       storage-provisioner                              kube-system
	97d45770cec06       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    23 seconds ago      Running             kindnet-cni               0                   8d9435aa5df44       kindnet-mcvws                                    kube-system
	99d8f893c2800       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   2d66c21040101       kube-proxy-szwsd                                 kube-system
	fc1c131250cb8       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      46 seconds ago      Running             kube-controller-manager   0                   c4371e44f5d03       kube-controller-manager-old-k8s-version-729486   kube-system
	f04d3c6f82d3c       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      46 seconds ago      Running             kube-scheduler            0                   76bcc5d032b68       kube-scheduler-old-k8s-version-729486            kube-system
	a323024eb2810       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      46 seconds ago      Running             etcd                      0                   8442605c8b900       etcd-old-k8s-version-729486                      kube-system
	7b0ec5e2101fe       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      46 seconds ago      Running             kube-apiserver            0                   d5f0ca937ec9e       kube-apiserver-old-k8s-version-729486            kube-system
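The table above is the node's container inventory as reported by the CRI runtime. On a crio-based minikube node, a similar listing can be pulled by hand; a rough sketch, assuming a shell on the node and crictl available there (standard minikube and crictl commands, not taken from this run):

    # open a shell on the node for this profile
    out/minikube-linux-arm64 -p old-k8s-version-729486 ssh

    # list all containers, running and exited, known to CRI-O
    sudo crictl ps -a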
	
	
	==> coredns [4411cf95d61e06a6bbf16a27989d962a157aa3016cbf1e7a93fab18160d9f91c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57985 - 28193 "HINFO IN 5351466616320913336.3218537834776436188. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011164763s
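For context, the "host record injected into CoreDNS's ConfigMap" step earlier in this run rewrites the Corefile so that host.minikube.internal resolves to the host gateway (192.168.76.1 here). Going by the sed expression in that command, the edited Corefile fragment should look roughly like the sketch below (expected shape only, not output captured from the cluster):

    log
    errors
    # ... other CoreDNS plugins ...
    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf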
	
	
	==> describe nodes <==
	Name:               old-k8s-version-729486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-729486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=old-k8s-version-729486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_01_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:01:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-729486
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:01:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:01:49 +0000   Sat, 10 Jan 2026 10:01:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:01:49 +0000   Sat, 10 Jan 2026 10:01:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:01:49 +0000   Sat, 10 Jan 2026 10:01:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:01:49 +0000   Sat, 10 Jan 2026 10:01:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-729486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                6835df29-9649-49d8-a5dc-2264bb66093f
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-xsgtg                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-729486                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-mcvws                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-729486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-729486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-szwsd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-729486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node old-k8s-version-729486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-729486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-729486 event: Registered Node old-k8s-version-729486 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-729486 status is now: NodeReady
	
	
	==> dmesg <==
	[  +3.010233] overlayfs: idmapped layers are currently not supported
	[Jan10 09:29] overlayfs: idmapped layers are currently not supported
	[Jan10 09:30] overlayfs: idmapped layers are currently not supported
	[Jan10 09:31] overlayfs: idmapped layers are currently not supported
	[Jan10 09:35] overlayfs: idmapped layers are currently not supported
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a323024eb2810b580ca4f7ffa1171892e3597d58b5b28c216ca40d53be2033ba] <==
	{"level":"info","ts":"2026-01-10T10:01:11.887758Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:01:11.886925Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2026-01-10T10:01:11.88706Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:01:11.887911Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:01:11.887994Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:01:11.887345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T10:01:11.888259Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2026-01-10T10:01:12.852554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T10:01:12.852603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T10:01:12.852619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T10:01:12.852632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:01:12.852639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:01:12.852649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T10:01:12.852657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:01:12.856695Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-729486 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:01:12.856818Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T10:01:12.856956Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:01:12.857929Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T10:01:12.862631Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T10:01:12.862717Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T10:01:12.862781Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:01:12.863928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T10:01:12.864238Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T10:01:12.864528Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:01:12.864592Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:01:58 up  2:44,  0 user,  load average: 1.79, 1.50, 1.97
	Linux old-k8s-version-729486 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [97d45770cec0633c1d1a3bdfe85ac9b86677219b93b77ab18000baceb2a2196d] <==
	I0110 10:01:34.817903       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:01:34.818193       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:01:34.818368       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:01:34.818388       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:01:34.818402       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:01:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:01:35.019156       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:01:35.019230       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:01:35.019283       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:01:35.020150       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 10:01:35.122216       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:01:35.122368       1 metrics.go:72] Registering metrics
	I0110 10:01:35.122479       1 controller.go:711] "Syncing nftables rules"
	I0110 10:01:45.021189       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:01:45.021366       1 main.go:301] handling current node
	I0110 10:01:55.020870       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:01:55.021009       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7b0ec5e2101febde6822cb861edf7167acd77d1e731600d0ad7696633a183daa] <==
	I0110 10:01:15.835086       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0110 10:01:15.847109       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 10:01:15.848381       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 10:01:15.852327       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 10:01:15.852454       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0110 10:01:15.852677       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 10:01:15.852659       1 aggregator.go:166] initial CRD sync complete...
	I0110 10:01:15.852999       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 10:01:15.853008       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 10:01:15.853017       1 cache.go:39] Caches are synced for autoregister controller
	I0110 10:01:16.453194       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0110 10:01:16.463493       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0110 10:01:16.463659       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 10:01:17.071270       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:01:17.153559       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:01:17.273912       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 10:01:17.286901       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 10:01:17.288060       1 controller.go:624] quota admission added evaluator for: endpoints
	I0110 10:01:17.295307       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:01:17.673169       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 10:01:18.781172       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 10:01:18.798136       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 10:01:18.809918       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0110 10:01:31.292490       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0110 10:01:31.393920       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [fc1c131250cb8b81e8ae0bc1996f1aa0ef3e630a9c57555bded6da43f3e6f6a0] <==
	I0110 10:01:30.666963       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0110 10:01:30.667005       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0110 10:01:30.667025       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0110 10:01:30.670262       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0110 10:01:31.078086       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 10:01:31.078199       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 10:01:31.088003       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 10:01:31.304896       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0110 10:01:31.422926       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mcvws"
	I0110 10:01:31.485813       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-szwsd"
	I0110 10:01:31.653259       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lhjf6"
	I0110 10:01:31.693639       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xsgtg"
	I0110 10:01:31.732761       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="427.600762ms"
	I0110 10:01:31.748162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.572519ms"
	I0110 10:01:31.748242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.039µs"
	I0110 10:01:32.705156       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0110 10:01:32.746152       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-lhjf6"
	I0110 10:01:32.771724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.494246ms"
	I0110 10:01:32.780794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.023321ms"
	I0110 10:01:32.780958       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.512µs"
	I0110 10:01:45.593519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.073µs"
	I0110 10:01:45.617038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.608µs"
	I0110 10:01:46.287169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.399076ms"
	I0110 10:01:46.287707       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.321µs"
	I0110 10:01:50.535543       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [99d8f893c2800a8f84d468fe6689e8e86bd57f28aa1a8f0e8da24ea5af70eb83] <==
	I0110 10:01:31.994043       1 server_others.go:69] "Using iptables proxy"
	I0110 10:01:32.014200       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0110 10:01:32.060733       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:01:32.062788       1 server_others.go:152] "Using iptables Proxier"
	I0110 10:01:32.062818       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 10:01:32.062825       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 10:01:32.062858       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 10:01:32.063057       1 server.go:846] "Version info" version="v1.28.0"
	I0110 10:01:32.063078       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:01:32.064425       1 config.go:188] "Starting service config controller"
	I0110 10:01:32.064437       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 10:01:32.064454       1 config.go:97] "Starting endpoint slice config controller"
	I0110 10:01:32.064457       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 10:01:32.066273       1 config.go:315] "Starting node config controller"
	I0110 10:01:32.066287       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 10:01:32.165319       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 10:01:32.165326       1 shared_informer.go:318] Caches are synced for service config
	I0110 10:01:32.166486       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f04d3c6f82d3ca01493226fa29033ed9ad2892541b71fe24aabd8ea9f84f2728] <==
	W0110 10:01:15.782195       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0110 10:01:15.782209       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0110 10:01:15.782267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0110 10:01:15.782282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0110 10:01:15.782322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0110 10:01:15.782338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0110 10:01:15.781872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0110 10:01:15.782355       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0110 10:01:15.782447       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0110 10:01:15.782525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0110 10:01:15.783245       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0110 10:01:15.783276       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0110 10:01:16.721844       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0110 10:01:16.722011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0110 10:01:16.742010       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0110 10:01:16.742052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0110 10:01:16.835431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0110 10:01:16.835486       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0110 10:01:16.862582       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0110 10:01:16.862620       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0110 10:01:16.862680       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0110 10:01:16.862697       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0110 10:01:16.901719       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0110 10:01:16.901823       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0110 10:01:19.170710       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 10 10:01:31 old-k8s-version-729486 kubelet[1387]: I0110 10:01:31.698355    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a148c52-2e43-474d-accb-ff93db5e4756-lib-modules\") pod \"kindnet-mcvws\" (UID: \"9a148c52-2e43-474d-accb-ff93db5e4756\") " pod="kube-system/kindnet-mcvws"
	Jan 10 10:01:31 old-k8s-version-729486 kubelet[1387]: I0110 10:01:31.698404    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/550b3042-ef9d-4e44-978b-f18534dc02bb-lib-modules\") pod \"kube-proxy-szwsd\" (UID: \"550b3042-ef9d-4e44-978b-f18534dc02bb\") " pod="kube-system/kube-proxy-szwsd"
	Jan 10 10:01:31 old-k8s-version-729486 kubelet[1387]: I0110 10:01:31.698433    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t6gr\" (UniqueName: \"kubernetes.io/projected/550b3042-ef9d-4e44-978b-f18534dc02bb-kube-api-access-6t6gr\") pod \"kube-proxy-szwsd\" (UID: \"550b3042-ef9d-4e44-978b-f18534dc02bb\") " pod="kube-system/kube-proxy-szwsd"
	Jan 10 10:01:31 old-k8s-version-729486 kubelet[1387]: I0110 10:01:31.698462    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9a148c52-2e43-474d-accb-ff93db5e4756-cni-cfg\") pod \"kindnet-mcvws\" (UID: \"9a148c52-2e43-474d-accb-ff93db5e4756\") " pod="kube-system/kindnet-mcvws"
	Jan 10 10:01:31 old-k8s-version-729486 kubelet[1387]: I0110 10:01:31.698485    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a148c52-2e43-474d-accb-ff93db5e4756-xtables-lock\") pod \"kindnet-mcvws\" (UID: \"9a148c52-2e43-474d-accb-ff93db5e4756\") " pod="kube-system/kindnet-mcvws"
	Jan 10 10:01:31 old-k8s-version-729486 kubelet[1387]: I0110 10:01:31.698508    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8z8\" (UniqueName: \"kubernetes.io/projected/9a148c52-2e43-474d-accb-ff93db5e4756-kube-api-access-9g8z8\") pod \"kindnet-mcvws\" (UID: \"9a148c52-2e43-474d-accb-ff93db5e4756\") " pod="kube-system/kindnet-mcvws"
	Jan 10 10:01:31 old-k8s-version-729486 kubelet[1387]: I0110 10:01:31.698533    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/550b3042-ef9d-4e44-978b-f18534dc02bb-xtables-lock\") pod \"kube-proxy-szwsd\" (UID: \"550b3042-ef9d-4e44-978b-f18534dc02bb\") " pod="kube-system/kube-proxy-szwsd"
	Jan 10 10:01:31 old-k8s-version-729486 kubelet[1387]: I0110 10:01:31.698555    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/550b3042-ef9d-4e44-978b-f18534dc02bb-kube-proxy\") pod \"kube-proxy-szwsd\" (UID: \"550b3042-ef9d-4e44-978b-f18534dc02bb\") " pod="kube-system/kube-proxy-szwsd"
	Jan 10 10:01:31 old-k8s-version-729486 kubelet[1387]: W0110 10:01:31.842459    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/crio-2d66c21040101bd7b260ad78174b80c8d5db4a49b18b2c1e00889a0d9e2238f4 WatchSource:0}: Error finding container 2d66c21040101bd7b260ad78174b80c8d5db4a49b18b2c1e00889a0d9e2238f4: Status 404 returned error can't find the container with id 2d66c21040101bd7b260ad78174b80c8d5db4a49b18b2c1e00889a0d9e2238f4
	Jan 10 10:01:32 old-k8s-version-729486 kubelet[1387]: W0110 10:01:32.112206    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/crio-8d9435aa5df44070250ea1d6f7b286aa7b0d1b24130cead1258f9cde25a5339e WatchSource:0}: Error finding container 8d9435aa5df44070250ea1d6f7b286aa7b0d1b24130cead1258f9cde25a5339e: Status 404 returned error can't find the container with id 8d9435aa5df44070250ea1d6f7b286aa7b0d1b24130cead1258f9cde25a5339e
	Jan 10 10:01:32 old-k8s-version-729486 kubelet[1387]: I0110 10:01:32.236031    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-szwsd" podStartSLOduration=1.235990064 podCreationTimestamp="2026-01-10 10:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:01:32.235546267 +0000 UTC m=+13.485941657" watchObservedRunningTime="2026-01-10 10:01:32.235990064 +0000 UTC m=+13.486385454"
	Jan 10 10:01:39 old-k8s-version-729486 kubelet[1387]: I0110 10:01:39.030912    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mcvws" podStartSLOduration=5.534874992 podCreationTimestamp="2026-01-10 10:01:31 +0000 UTC" firstStartedPulling="2026-01-10 10:01:32.118075967 +0000 UTC m=+13.368471357" lastFinishedPulling="2026-01-10 10:01:34.614067847 +0000 UTC m=+15.864463237" observedRunningTime="2026-01-10 10:01:35.238100222 +0000 UTC m=+16.488495621" watchObservedRunningTime="2026-01-10 10:01:39.030866872 +0000 UTC m=+20.281262254"
	Jan 10 10:01:45 old-k8s-version-729486 kubelet[1387]: I0110 10:01:45.523875    1387 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 10 10:01:45 old-k8s-version-729486 kubelet[1387]: I0110 10:01:45.591224    1387 topology_manager.go:215] "Topology Admit Handler" podUID="c3718681-9e27-4160-b9fa-8462b5c71a26" podNamespace="kube-system" podName="coredns-5dd5756b68-xsgtg"
	Jan 10 10:01:45 old-k8s-version-729486 kubelet[1387]: I0110 10:01:45.594138    1387 topology_manager.go:215] "Topology Admit Handler" podUID="016f019c-d231-41db-b408-7bc9e1fb613e" podNamespace="kube-system" podName="storage-provisioner"
	Jan 10 10:01:45 old-k8s-version-729486 kubelet[1387]: I0110 10:01:45.701651    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jk2\" (UniqueName: \"kubernetes.io/projected/c3718681-9e27-4160-b9fa-8462b5c71a26-kube-api-access-g7jk2\") pod \"coredns-5dd5756b68-xsgtg\" (UID: \"c3718681-9e27-4160-b9fa-8462b5c71a26\") " pod="kube-system/coredns-5dd5756b68-xsgtg"
	Jan 10 10:01:45 old-k8s-version-729486 kubelet[1387]: I0110 10:01:45.701718    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/016f019c-d231-41db-b408-7bc9e1fb613e-tmp\") pod \"storage-provisioner\" (UID: \"016f019c-d231-41db-b408-7bc9e1fb613e\") " pod="kube-system/storage-provisioner"
	Jan 10 10:01:45 old-k8s-version-729486 kubelet[1387]: I0110 10:01:45.701765    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3718681-9e27-4160-b9fa-8462b5c71a26-config-volume\") pod \"coredns-5dd5756b68-xsgtg\" (UID: \"c3718681-9e27-4160-b9fa-8462b5c71a26\") " pod="kube-system/coredns-5dd5756b68-xsgtg"
	Jan 10 10:01:45 old-k8s-version-729486 kubelet[1387]: I0110 10:01:45.701792    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrqx\" (UniqueName: \"kubernetes.io/projected/016f019c-d231-41db-b408-7bc9e1fb613e-kube-api-access-dcrqx\") pod \"storage-provisioner\" (UID: \"016f019c-d231-41db-b408-7bc9e1fb613e\") " pod="kube-system/storage-provisioner"
	Jan 10 10:01:45 old-k8s-version-729486 kubelet[1387]: W0110 10:01:45.908127    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/crio-2e223fa7b8bb4e36637b63a78a88b87b882389eb493ec3419a5aec4671acdb2f WatchSource:0}: Error finding container 2e223fa7b8bb4e36637b63a78a88b87b882389eb493ec3419a5aec4671acdb2f: Status 404 returned error can't find the container with id 2e223fa7b8bb4e36637b63a78a88b87b882389eb493ec3419a5aec4671acdb2f
	Jan 10 10:01:45 old-k8s-version-729486 kubelet[1387]: W0110 10:01:45.935714    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/crio-b60e29f4d57cb4fc94ae20bd064286b24052360bd01cd95cc5e94fda049ecfcc WatchSource:0}: Error finding container b60e29f4d57cb4fc94ae20bd064286b24052360bd01cd95cc5e94fda049ecfcc: Status 404 returned error can't find the container with id b60e29f4d57cb4fc94ae20bd064286b24052360bd01cd95cc5e94fda049ecfcc
	Jan 10 10:01:46 old-k8s-version-729486 kubelet[1387]: I0110 10:01:46.276382    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.276326903 podCreationTimestamp="2026-01-10 10:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:01:46.261046279 +0000 UTC m=+27.511441685" watchObservedRunningTime="2026-01-10 10:01:46.276326903 +0000 UTC m=+27.526722285"
	Jan 10 10:01:48 old-k8s-version-729486 kubelet[1387]: I0110 10:01:48.404185    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xsgtg" podStartSLOduration=17.404145974 podCreationTimestamp="2026-01-10 10:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:01:46.277454679 +0000 UTC m=+27.527850069" watchObservedRunningTime="2026-01-10 10:01:48.404145974 +0000 UTC m=+29.654541364"
	Jan 10 10:01:48 old-k8s-version-729486 kubelet[1387]: I0110 10:01:48.404326    1387 topology_manager.go:215] "Topology Admit Handler" podUID="9cfed1f7-4d02-4c7d-acf4-33d7165fff27" podNamespace="default" podName="busybox"
	Jan 10 10:01:48 old-k8s-version-729486 kubelet[1387]: I0110 10:01:48.518010    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8gp9\" (UniqueName: \"kubernetes.io/projected/9cfed1f7-4d02-4c7d-acf4-33d7165fff27-kube-api-access-v8gp9\") pod \"busybox\" (UID: \"9cfed1f7-4d02-4c7d-acf4-33d7165fff27\") " pod="default/busybox"
	
	
	==> storage-provisioner [f6db3a215cc1f072cd18dbffe24f2514c5b27854ff611b9094863be8171aa729] <==
	I0110 10:01:45.968035       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:01:45.987996       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:01:45.988153       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 10:01:46.016876       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:01:46.021997       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-729486_0e8f2f1b-2044-49b6-a0b1-df311e14893e!
	I0110 10:01:46.024646       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b364ec7-3081-49c8-b8f1-66ca586b914b", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-729486_0e8f2f1b-2044-49b6-a0b1-df311e14893e became leader
	I0110 10:01:46.122197       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-729486_0e8f2f1b-2044-49b6-a0b1-df311e14893e!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-729486 -n old-k8s-version-729486
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-729486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.46s)

TestStartStop/group/old-k8s-version/serial/Pause (6.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-729486 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-729486 --alsologtostderr -v=1: exit status 80 (1.789780704s)

-- stdout --
	* Pausing node old-k8s-version-729486 ... 
	
	

-- /stdout --
** stderr ** 
	I0110 10:03:18.332609  504427 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:03:18.332778  504427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:03:18.332808  504427 out.go:374] Setting ErrFile to fd 2...
	I0110 10:03:18.332830  504427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:03:18.333129  504427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:03:18.333415  504427 out.go:368] Setting JSON to false
	I0110 10:03:18.333469  504427 mustload.go:66] Loading cluster: old-k8s-version-729486
	I0110 10:03:18.333867  504427 config.go:182] Loaded profile config "old-k8s-version-729486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 10:03:18.334352  504427 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:03:18.351599  504427 host.go:66] Checking if "old-k8s-version-729486" exists ...
	I0110 10:03:18.351935  504427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:03:18.417238  504427 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 10:03:18.408131172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:03:18.417877  504427 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-729486 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s
(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 10:03:18.424695  504427 out.go:179] * Pausing node old-k8s-version-729486 ... 
	I0110 10:03:18.427499  504427 host.go:66] Checking if "old-k8s-version-729486" exists ...
	I0110 10:03:18.427848  504427 ssh_runner.go:195] Run: systemctl --version
	I0110 10:03:18.427901  504427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:03:18.444590  504427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:03:18.547172  504427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:03:18.562629  504427 pause.go:52] kubelet running: true
	I0110 10:03:18.562723  504427 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:03:18.859951  504427 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:03:18.860049  504427 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:03:18.934893  504427 cri.go:96] found id: "18fec54aabaa17d53d921341aeb10a80766bce2af0d5fb40f462662b29ee03f8"
	I0110 10:03:18.934914  504427 cri.go:96] found id: "b5741a9b10c5f413ff081cf53038322a13bc68558e9bcb48ec9f693161763914"
	I0110 10:03:18.934920  504427 cri.go:96] found id: "cfe9bbe8014e3d63ffcc2a2b208a8181dc00308bdee332d52426fe84c746f58c"
	I0110 10:03:18.934924  504427 cri.go:96] found id: "e840a9a6d843f4f94134f005d142bc77765ec34f5d780777c800b3831d78be18"
	I0110 10:03:18.934932  504427 cri.go:96] found id: "113f2c97bb2d9820a9ff596f3fde5fccae866c32a36827c6e86be9c58fdc01f2"
	I0110 10:03:18.934936  504427 cri.go:96] found id: "5cc3bd4bc4c1fca307ced2a934a7aef674e63f5f91fcd54697c1c0e8a7e5e676"
	I0110 10:03:18.934939  504427 cri.go:96] found id: "c0a4eb50e2c15f0c909a14942c5e6e51335dfc5f1b4c205776a384e82feb0830"
	I0110 10:03:18.934944  504427 cri.go:96] found id: "4129c584728a1d9d005e5900b1d29bb8d94b5826d72dd240b3b77773e40abcac"
	I0110 10:03:18.934947  504427 cri.go:96] found id: "b8d4be0f660bd2d5bf4c919b8f3ef7f06479e1cc6044562ee85d22b026733d09"
	I0110 10:03:18.934953  504427 cri.go:96] found id: "2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337"
	I0110 10:03:18.934956  504427 cri.go:96] found id: "44b2462370e7204654417d02b3c6a94563343ab46fe0617bebb08e76506c8f1b"
	I0110 10:03:18.934959  504427 cri.go:96] found id: ""
	I0110 10:03:18.935009  504427 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:03:18.945988  504427 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:03:18Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:03:19.116443  504427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:03:19.129707  504427 pause.go:52] kubelet running: false
	I0110 10:03:19.129818  504427 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:03:19.311025  504427 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:03:19.311127  504427 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:03:19.385956  504427 cri.go:96] found id: "18fec54aabaa17d53d921341aeb10a80766bce2af0d5fb40f462662b29ee03f8"
	I0110 10:03:19.385986  504427 cri.go:96] found id: "b5741a9b10c5f413ff081cf53038322a13bc68558e9bcb48ec9f693161763914"
	I0110 10:03:19.385993  504427 cri.go:96] found id: "cfe9bbe8014e3d63ffcc2a2b208a8181dc00308bdee332d52426fe84c746f58c"
	I0110 10:03:19.385997  504427 cri.go:96] found id: "e840a9a6d843f4f94134f005d142bc77765ec34f5d780777c800b3831d78be18"
	I0110 10:03:19.386000  504427 cri.go:96] found id: "113f2c97bb2d9820a9ff596f3fde5fccae866c32a36827c6e86be9c58fdc01f2"
	I0110 10:03:19.386007  504427 cri.go:96] found id: "5cc3bd4bc4c1fca307ced2a934a7aef674e63f5f91fcd54697c1c0e8a7e5e676"
	I0110 10:03:19.386010  504427 cri.go:96] found id: "c0a4eb50e2c15f0c909a14942c5e6e51335dfc5f1b4c205776a384e82feb0830"
	I0110 10:03:19.386013  504427 cri.go:96] found id: "4129c584728a1d9d005e5900b1d29bb8d94b5826d72dd240b3b77773e40abcac"
	I0110 10:03:19.386016  504427 cri.go:96] found id: "b8d4be0f660bd2d5bf4c919b8f3ef7f06479e1cc6044562ee85d22b026733d09"
	I0110 10:03:19.386028  504427 cri.go:96] found id: "2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337"
	I0110 10:03:19.386036  504427 cri.go:96] found id: "44b2462370e7204654417d02b3c6a94563343ab46fe0617bebb08e76506c8f1b"
	I0110 10:03:19.386039  504427 cri.go:96] found id: ""
	I0110 10:03:19.386101  504427 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:03:19.762588  504427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:03:19.775372  504427 pause.go:52] kubelet running: false
	I0110 10:03:19.775437  504427 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:03:19.958615  504427 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:03:19.958707  504427 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:03:20.036313  504427 cri.go:96] found id: "18fec54aabaa17d53d921341aeb10a80766bce2af0d5fb40f462662b29ee03f8"
	I0110 10:03:20.036340  504427 cri.go:96] found id: "b5741a9b10c5f413ff081cf53038322a13bc68558e9bcb48ec9f693161763914"
	I0110 10:03:20.036346  504427 cri.go:96] found id: "cfe9bbe8014e3d63ffcc2a2b208a8181dc00308bdee332d52426fe84c746f58c"
	I0110 10:03:20.036368  504427 cri.go:96] found id: "e840a9a6d843f4f94134f005d142bc77765ec34f5d780777c800b3831d78be18"
	I0110 10:03:20.036372  504427 cri.go:96] found id: "113f2c97bb2d9820a9ff596f3fde5fccae866c32a36827c6e86be9c58fdc01f2"
	I0110 10:03:20.036377  504427 cri.go:96] found id: "5cc3bd4bc4c1fca307ced2a934a7aef674e63f5f91fcd54697c1c0e8a7e5e676"
	I0110 10:03:20.036380  504427 cri.go:96] found id: "c0a4eb50e2c15f0c909a14942c5e6e51335dfc5f1b4c205776a384e82feb0830"
	I0110 10:03:20.036383  504427 cri.go:96] found id: "4129c584728a1d9d005e5900b1d29bb8d94b5826d72dd240b3b77773e40abcac"
	I0110 10:03:20.036391  504427 cri.go:96] found id: "b8d4be0f660bd2d5bf4c919b8f3ef7f06479e1cc6044562ee85d22b026733d09"
	I0110 10:03:20.036401  504427 cri.go:96] found id: "2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337"
	I0110 10:03:20.036407  504427 cri.go:96] found id: "44b2462370e7204654417d02b3c6a94563343ab46fe0617bebb08e76506c8f1b"
	I0110 10:03:20.036410  504427 cri.go:96] found id: ""
	I0110 10:03:20.036465  504427 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:03:20.053402  504427 out.go:203] 
	W0110 10:03:20.056403  504427 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:03:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:03:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 10:03:20.056428  504427 out.go:285] * 
	* 
	W0110 10:03:20.060898  504427 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 10:03:20.064328  504427 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-729486 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-729486
helpers_test.go:244: (dbg) docker inspect old-k8s-version-729486:

-- stdout --
	[
	    {
	        "Id": "e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a",
	        "Created": "2026-01-10T10:00:53.623819553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:02:11.67873097Z",
	            "FinishedAt": "2026-01-10T10:02:10.864770503Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/hostname",
	        "HostsPath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/hosts",
	        "LogPath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a-json.log",
	        "Name": "/old-k8s-version-729486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-729486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-729486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a",
	                "LowerDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-729486",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-729486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-729486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-729486",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-729486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c02d466327c7bb6916d1250208bae3879dd6e4a0477f53bef9eb515dc15eae8",
	            "SandboxKey": "/var/run/docker/netns/0c02d466327c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-729486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:6b:91:74:c7:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2fc70c7426464ff9890052e4156c669ae44556450aab6cdc6b7787e2fd7c393f",
	                    "EndpointID": "3f39f5e07d3706d99ac87300c49a2a82a893a3d7583a2e735e0a1f2e7e6cb867",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-729486",
	                        "e3db4a48fc4a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-729486 -n old-k8s-version-729486
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-729486 -n old-k8s-version-729486: exit status 2 (385.531554ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-729486 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-729486 logs -n 25: (1.302584056s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-255897 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo containerd config dump                                                                                                                                                                                                  │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo crio config                                                                                                                                                                                                             │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ delete  │ -p cilium-255897                                                                                                                                                                                                                              │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:57 UTC │ 10 Jan 26 09:58 UTC │
	│ delete  │ -p cert-expiration-599529                                                                                                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │ 10 Jan 26 09:58 UTC │
	│ start   │ -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-524845 │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │                     │
	│ delete  │ -p force-systemd-env-646877                                                                                                                                                                                                                   │ force-systemd-env-646877  │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p cert-options-525619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ cert-options-525619 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ -p cert-options-525619 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ delete  │ -p cert-options-525619                                                                                                                                                                                                                        │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │                     │
	│ stop    │ -p old-k8s-version-729486 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │ 10 Jan 26 10:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-729486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:02 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:03 UTC │
	│ image   │ old-k8s-version-729486 image list --format=json                                                                                                                                                                                               │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ pause   │ -p old-k8s-version-729486 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:02:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:02:11.398974  501605 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:02:11.399162  501605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:02:11.399191  501605 out.go:374] Setting ErrFile to fd 2...
	I0110 10:02:11.399212  501605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:02:11.399478  501605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:02:11.399871  501605 out.go:368] Setting JSON to false
	I0110 10:02:11.400809  501605 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9881,"bootTime":1768029451,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:02:11.400912  501605 start.go:143] virtualization:  
	I0110 10:02:11.406072  501605 out.go:179] * [old-k8s-version-729486] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:02:11.409096  501605 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:02:11.409144  501605 notify.go:221] Checking for updates...
	I0110 10:02:11.414984  501605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:02:11.417954  501605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:02:11.420820  501605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:02:11.423721  501605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:02:11.426690  501605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:02:11.430045  501605 config.go:182] Loaded profile config "old-k8s-version-729486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 10:02:11.433357  501605 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I0110 10:02:11.436130  501605 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:02:11.463909  501605 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:02:11.464031  501605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:02:11.531606  501605 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:02:11.521934243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:02:11.531728  501605 docker.go:319] overlay module found
	I0110 10:02:11.536657  501605 out.go:179] * Using the docker driver based on existing profile
	I0110 10:02:11.539508  501605 start.go:309] selected driver: docker
	I0110 10:02:11.539528  501605 start.go:928] validating driver "docker" against &{Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:02:11.539636  501605 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:02:11.540364  501605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:02:11.594341  501605 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:02:11.584959251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:02:11.594672  501605 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:02:11.594710  501605 cni.go:84] Creating CNI manager for ""
	I0110 10:02:11.594769  501605 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:02:11.594813  501605 start.go:353] cluster config:
	{Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:02:11.598055  501605 out.go:179] * Starting "old-k8s-version-729486" primary control-plane node in "old-k8s-version-729486" cluster
	I0110 10:02:11.600950  501605 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:02:11.603924  501605 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:02:11.606828  501605 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 10:02:11.606879  501605 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:02:11.606901  501605 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:02:11.606904  501605 cache.go:65] Caching tarball of preloaded images
	I0110 10:02:11.607031  501605 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:02:11.607042  501605 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0110 10:02:11.607145  501605 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/config.json ...
	I0110 10:02:11.626424  501605 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:02:11.626446  501605 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:02:11.626462  501605 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:02:11.626492  501605 start.go:360] acquireMachinesLock for old-k8s-version-729486: {Name:mk0f30d4f7ea165498ccd896959105635842f094 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:02:11.626551  501605 start.go:364] duration metric: took 37.638µs to acquireMachinesLock for "old-k8s-version-729486"
	I0110 10:02:11.626580  501605 start.go:96] Skipping create...Using existing machine configuration
	I0110 10:02:11.626589  501605 fix.go:54] fixHost starting: 
	I0110 10:02:11.626851  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:11.643170  501605 fix.go:112] recreateIfNeeded on old-k8s-version-729486: state=Stopped err=<nil>
	W0110 10:02:11.643206  501605 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 10:02:11.646478  501605 out.go:252] * Restarting existing docker container for "old-k8s-version-729486" ...
	I0110 10:02:11.646579  501605 cli_runner.go:164] Run: docker start old-k8s-version-729486
	I0110 10:02:11.880789  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:11.904439  501605 kic.go:430] container "old-k8s-version-729486" state is running.
	I0110 10:02:11.904900  501605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-729486
	I0110 10:02:11.932007  501605 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/config.json ...
	I0110 10:02:11.932247  501605 machine.go:94] provisionDockerMachine start ...
	I0110 10:02:11.932315  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:11.964813  501605 main.go:144] libmachine: Using SSH client type: native
	I0110 10:02:11.965143  501605 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I0110 10:02:11.965159  501605 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:02:11.965801  501605 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 10:02:15.128574  501605 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-729486
	
	I0110 10:02:15.128602  501605 ubuntu.go:182] provisioning hostname "old-k8s-version-729486"
	I0110 10:02:15.128740  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:15.147621  501605 main.go:144] libmachine: Using SSH client type: native
	I0110 10:02:15.147936  501605 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I0110 10:02:15.147956  501605 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-729486 && echo "old-k8s-version-729486" | sudo tee /etc/hostname
	I0110 10:02:15.306065  501605 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-729486
	
	I0110 10:02:15.306189  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:15.324176  501605 main.go:144] libmachine: Using SSH client type: native
	I0110 10:02:15.324487  501605 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I0110 10:02:15.324579  501605 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-729486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-729486/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-729486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:02:15.476780  501605 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:02:15.476809  501605 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:02:15.476832  501605 ubuntu.go:190] setting up certificates
	I0110 10:02:15.476842  501605 provision.go:84] configureAuth start
	I0110 10:02:15.476916  501605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-729486
	I0110 10:02:15.493414  501605 provision.go:143] copyHostCerts
	I0110 10:02:15.493499  501605 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:02:15.493524  501605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:02:15.493606  501605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:02:15.493716  501605 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:02:15.493727  501605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:02:15.493754  501605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:02:15.493860  501605 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:02:15.493871  501605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:02:15.493898  501605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:02:15.493950  501605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-729486 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-729486]
	I0110 10:02:15.710865  501605 provision.go:177] copyRemoteCerts
	I0110 10:02:15.710991  501605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:02:15.711076  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:15.729525  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:15.832977  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:02:15.851751  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0110 10:02:15.869401  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:02:15.887536  501605 provision.go:87] duration metric: took 410.663242ms to configureAuth
	I0110 10:02:15.887576  501605 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:02:15.887786  501605 config.go:182] Loaded profile config "old-k8s-version-729486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 10:02:15.887901  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:15.908720  501605 main.go:144] libmachine: Using SSH client type: native
	I0110 10:02:15.909031  501605 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I0110 10:02:15.909051  501605 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:02:16.266866  501605 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:02:16.266890  501605 machine.go:97] duration metric: took 4.334626795s to provisionDockerMachine
	I0110 10:02:16.266902  501605 start.go:293] postStartSetup for "old-k8s-version-729486" (driver="docker")
	I0110 10:02:16.266923  501605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:02:16.267000  501605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:02:16.267055  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:16.288867  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:16.392324  501605 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:02:16.395734  501605 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:02:16.395764  501605 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:02:16.395778  501605 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:02:16.395835  501605 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:02:16.395920  501605 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:02:16.396024  501605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:02:16.409018  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:02:16.430148  501605 start.go:296] duration metric: took 163.221695ms for postStartSetup
	I0110 10:02:16.430231  501605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:02:16.430293  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:16.451116  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:16.553803  501605 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:02:16.558626  501605 fix.go:56] duration metric: took 4.932028845s for fixHost
	I0110 10:02:16.558656  501605 start.go:83] releasing machines lock for "old-k8s-version-729486", held for 4.93209054s
	I0110 10:02:16.558727  501605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-729486
	I0110 10:02:16.576575  501605 ssh_runner.go:195] Run: cat /version.json
	I0110 10:02:16.576590  501605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:02:16.576634  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:16.576657  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:16.593723  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:16.602125  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:16.794981  501605 ssh_runner.go:195] Run: systemctl --version
	I0110 10:02:16.801511  501605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:02:16.838059  501605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:02:16.842721  501605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:02:16.842823  501605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:02:16.850868  501605 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 10:02:16.850893  501605 start.go:496] detecting cgroup driver to use...
	I0110 10:02:16.850942  501605 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:02:16.850997  501605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:02:16.866457  501605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:02:16.879639  501605 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:02:16.879708  501605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:02:16.895691  501605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:02:16.908884  501605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:02:17.022230  501605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:02:17.156133  501605 docker.go:234] disabling docker service ...
	I0110 10:02:17.156203  501605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:02:17.172836  501605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:02:17.186287  501605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:02:17.307588  501605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:02:17.441504  501605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:02:17.454576  501605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:02:17.470483  501605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0110 10:02:17.470591  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.479584  501605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:02:17.479673  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.488948  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.498151  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.507740  501605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:02:17.516330  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.525912  501605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.534746  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.543671  501605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:02:17.551328  501605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:02:17.558665  501605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:02:17.679630  501605 ssh_runner.go:195] Run: sudo systemctl restart crio
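The CRI-O preparation logged above reduces to a short, idempotent shell sequence. The following is a condensed sketch of those same steps, taken from the commands in the log and assuming the stock kicbase paths /etc/crictl.yaml and /etc/crio/crio.conf.d/02-crio.conf, run as root on the node:

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | tee /etc/crictl.yaml
	# pin the pause image and match the cgroup driver the kubelet will use
	sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# enable IPv4 forwarding, then reload units and restart the runtime
	echo 1 > /proc/sys/net/ipv4/ip_forward
	systemctl daemon-reload && systemctl restart crio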
	I0110 10:02:17.862865  501605 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:02:17.862937  501605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:02:17.866766  501605 start.go:574] Will wait 60s for crictl version
	I0110 10:02:17.866906  501605 ssh_runner.go:195] Run: which crictl
	I0110 10:02:17.870345  501605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:02:17.895286  501605 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:02:17.895370  501605 ssh_runner.go:195] Run: crio --version
	I0110 10:02:17.930357  501605 ssh_runner.go:195] Run: crio --version
	I0110 10:02:17.968251  501605 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I0110 10:02:17.970998  501605 cli_runner.go:164] Run: docker network inspect old-k8s-version-729486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:02:17.987450  501605 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:02:17.991110  501605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:02:18.000740  501605 kubeadm.go:884] updating cluster {Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:02:18.000860  501605 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 10:02:18.000919  501605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:02:18.041557  501605 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:02:18.041578  501605 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:02:18.041640  501605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:02:18.071063  501605 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:02:18.071127  501605 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:02:18.071151  501605 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I0110 10:02:18.071282  501605 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-729486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:02:18.071387  501605 ssh_runner.go:195] Run: crio config
	I0110 10:02:18.146892  501605 cni.go:84] Creating CNI manager for ""
	I0110 10:02:18.146925  501605 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:02:18.147007  501605 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:02:18.147065  501605 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-729486 NodeName:old-k8s-version-729486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:02:18.147299  501605 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-729486"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:02:18.147467  501605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0110 10:02:18.155939  501605 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:02:18.156037  501605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:02:18.163856  501605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0110 10:02:18.177243  501605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:02:18.190067  501605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
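With the kubeadm config rendered above now written to /var/tmp/minikube/kubeadm.yaml.new, it can be checked before kubeadm ever consumes it. A minimal sketch, assuming the kubeadm binary is staged under /var/lib/minikube/binaries/v1.28.0 (where the binaries check above found the Kubernetes binaries); the diff is the same comparison the restart path performs further down:

	# compare the freshly rendered config with the one already on disk
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	# or let kubeadm parse and validate it without touching the cluster
	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run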
	I0110 10:02:18.203425  501605 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:02:18.206991  501605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:02:18.216462  501605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:02:18.335653  501605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:02:18.354453  501605 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486 for IP: 192.168.76.2
	I0110 10:02:18.354530  501605 certs.go:195] generating shared ca certs ...
	I0110 10:02:18.354562  501605 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:02:18.354746  501605 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:02:18.354820  501605 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:02:18.354855  501605 certs.go:257] generating profile certs ...
	I0110 10:02:18.354965  501605 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.key
	I0110 10:02:18.355059  501605 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.key.3e623c7c
	I0110 10:02:18.355137  501605 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.key
	I0110 10:02:18.355274  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:02:18.355336  501605 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:02:18.355369  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:02:18.355424  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:02:18.355480  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:02:18.355528  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:02:18.355608  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:02:18.356283  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:02:18.382398  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:02:18.400130  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:02:18.418777  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:02:18.442752  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0110 10:02:18.461446  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:02:18.482319  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:02:18.502712  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:02:18.526793  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:02:18.554314  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:02:18.577367  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:02:18.598933  501605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:02:18.612332  501605 ssh_runner.go:195] Run: openssl version
	I0110 10:02:18.618645  501605 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:02:18.626915  501605 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:02:18.638046  501605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:02:18.642525  501605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:02:18.642613  501605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:02:18.685587  501605 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:02:18.694364  501605 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:02:18.702896  501605 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:02:18.714785  501605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:02:18.718586  501605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:02:18.718697  501605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:02:18.759559  501605 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:02:18.766937  501605 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:02:18.774098  501605 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:02:18.781295  501605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:02:18.784812  501605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:02:18.784915  501605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:02:18.825991  501605 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
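The three certificate installs above all follow the same OpenSSL hashed-symlink pattern. A sketch for a single certificate, using the minikubeCA file from this run (its subject hash, b5213941, matches the symlink the log checks for); the run itself only verifies the hash-named link, so creating it explicitly is shown here for completeness:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	# OpenSSL resolves trust by subject hash, so a hash-named link must exist as well
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	sudo test -L "/etc/ssl/certs/${HASH}.0" && echo trusted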
	I0110 10:02:18.833229  501605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:02:18.836817  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 10:02:18.877482  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 10:02:18.918772  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 10:02:18.960022  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 10:02:19.019425  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 10:02:19.083096  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 10:02:19.160272  501605 kubeadm.go:401] StartCluster: {Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:02:19.160379  501605 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:02:19.160488  501605 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:02:19.213940  501605 cri.go:96] found id: "5cc3bd4bc4c1fca307ced2a934a7aef674e63f5f91fcd54697c1c0e8a7e5e676"
	I0110 10:02:19.213968  501605 cri.go:96] found id: "c0a4eb50e2c15f0c909a14942c5e6e51335dfc5f1b4c205776a384e82feb0830"
	I0110 10:02:19.213978  501605 cri.go:96] found id: "4129c584728a1d9d005e5900b1d29bb8d94b5826d72dd240b3b77773e40abcac"
	I0110 10:02:19.214000  501605 cri.go:96] found id: "b8d4be0f660bd2d5bf4c919b8f3ef7f06479e1cc6044562ee85d22b026733d09"
	I0110 10:02:19.214013  501605 cri.go:96] found id: ""
	I0110 10:02:19.214083  501605 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 10:02:19.230742  501605 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:02:19Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:02:19.230848  501605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:02:19.241659  501605 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 10:02:19.241681  501605 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 10:02:19.241763  501605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 10:02:19.251215  501605 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 10:02:19.251684  501605 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-729486" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:02:19.251826  501605 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-308033/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-729486" cluster setting kubeconfig missing "old-k8s-version-729486" context setting]
	I0110 10:02:19.252203  501605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:02:19.253851  501605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 10:02:19.265556  501605 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 10:02:19.265602  501605 kubeadm.go:602] duration metric: took 23.914802ms to restartPrimaryControlPlane
	I0110 10:02:19.265629  501605 kubeadm.go:403] duration metric: took 105.368044ms to StartCluster
	I0110 10:02:19.265653  501605 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:02:19.265751  501605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:02:19.266412  501605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:02:19.266662  501605 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:02:19.267048  501605 config.go:182] Loaded profile config "old-k8s-version-729486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 10:02:19.267118  501605 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:02:19.267264  501605 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-729486"
	I0110 10:02:19.267297  501605 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-729486"
	W0110 10:02:19.267309  501605 addons.go:248] addon storage-provisioner should already be in state true
	I0110 10:02:19.267334  501605 host.go:66] Checking if "old-k8s-version-729486" exists ...
	I0110 10:02:19.268199  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:19.268405  501605 addons.go:70] Setting dashboard=true in profile "old-k8s-version-729486"
	I0110 10:02:19.268425  501605 addons.go:239] Setting addon dashboard=true in "old-k8s-version-729486"
	W0110 10:02:19.268444  501605 addons.go:248] addon dashboard should already be in state true
	I0110 10:02:19.268481  501605 host.go:66] Checking if "old-k8s-version-729486" exists ...
	I0110 10:02:19.268833  501605 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-729486"
	I0110 10:02:19.268856  501605 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-729486"
	I0110 10:02:19.269128  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:19.269132  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:19.273134  501605 out.go:179] * Verifying Kubernetes components...
	I0110 10:02:19.276765  501605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:02:19.320318  501605 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:02:19.324548  501605 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:02:19.324570  501605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:02:19.324624  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:19.325256  501605 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-729486"
	W0110 10:02:19.325271  501605 addons.go:248] addon default-storageclass should already be in state true
	I0110 10:02:19.325294  501605 host.go:66] Checking if "old-k8s-version-729486" exists ...
	I0110 10:02:19.325802  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:19.335474  501605 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 10:02:19.340668  501605 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 10:02:19.343781  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 10:02:19.343807  501605 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 10:02:19.343878  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:19.386026  501605 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:02:19.386046  501605 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:02:19.386109  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:19.401926  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:19.436733  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:19.444823  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:19.645588  501605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:02:19.671085  501605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:02:19.758333  501605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:02:19.786840  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 10:02:19.786913  501605 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 10:02:19.852542  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 10:02:19.852616  501605 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 10:02:19.889470  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 10:02:19.889540  501605 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 10:02:19.942247  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 10:02:19.942315  501605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 10:02:20.002331  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 10:02:20.002411  501605 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 10:02:20.034514  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 10:02:20.034596  501605 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 10:02:20.069012  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 10:02:20.069091  501605 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 10:02:20.098046  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 10:02:20.098110  501605 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 10:02:20.117787  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:02:20.117863  501605 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 10:02:20.146882  501605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:02:21.842360  490351 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 10:02:21.842394  490351 kubeadm.go:319] 
	I0110 10:02:21.842516  490351 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 10:02:21.848886  490351 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:02:21.849075  490351 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:02:21.849219  490351 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:02:21.852606  490351 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:02:21.852695  490351 kubeadm.go:319] OS: Linux
	I0110 10:02:21.852754  490351 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:02:21.852807  490351 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:02:21.852857  490351 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:02:21.852908  490351 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:02:21.852959  490351 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:02:21.853011  490351 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:02:21.853060  490351 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:02:21.853110  490351 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:02:21.853159  490351 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:02:21.853236  490351 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:02:21.853338  490351 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:02:21.853434  490351 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:02:21.853501  490351 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:02:21.856776  490351 out.go:252]   - Generating certificates and keys ...
	I0110 10:02:21.856870  490351 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:02:21.856939  490351 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:02:21.857011  490351 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:02:21.857072  490351 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:02:21.857137  490351 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:02:21.857190  490351 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:02:21.857247  490351 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:02:21.857386  490351 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:02:21.857442  490351 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:02:21.857577  490351 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:02:21.857647  490351 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:02:21.857714  490351 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:02:21.857762  490351 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:02:21.857821  490351 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:02:21.857876  490351 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:02:21.857941  490351 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:02:21.857999  490351 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:02:21.858066  490351 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:02:21.858125  490351 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:02:21.858212  490351 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:02:21.858282  490351 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:02:21.861182  490351 out.go:252]   - Booting up control plane ...
	I0110 10:02:21.861340  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:02:21.861473  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:02:21.861592  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:02:21.861750  490351 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:02:21.861895  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:02:21.862059  490351 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:02:21.862188  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:02:21.862264  490351 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:02:21.862452  490351 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:02:21.862605  490351 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:02:21.862748  490351 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001342949s
	I0110 10:02:21.862806  490351 kubeadm.go:319] 
	I0110 10:02:21.862878  490351 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 10:02:21.862914  490351 kubeadm.go:319] 	- The kubelet is not running
	I0110 10:02:21.863025  490351 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 10:02:21.863030  490351 kubeadm.go:319] 
	I0110 10:02:21.863142  490351 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 10:02:21.863176  490351 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 10:02:21.863208  490351 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 10:02:21.863213  490351 kubeadm.go:319] 
	W0110 10:02:21.863327  490351 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001342949s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
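The wait-control-plane failure above comes down to the kubelet on force-systemd-flag-524845 never reporting healthy at 127.0.0.1:10248 within the 4m0s window. A minimal troubleshooting sketch, using only the commands the kubeadm output itself suggests and the health endpoint it polls (assuming shell access to the node, e.g. via `minikube -p force-systemd-flag-524845 ssh`):

	# Check whether the kubelet service is running and why it may have exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# Probe the same health endpoint kubeadm waits on during wait-control-plane
	curl -sSL http://127.0.0.1:10248/healthz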
	
	I0110 10:02:21.863399  490351 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0110 10:02:22.326865  490351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:02:22.351207  490351 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:02:22.351267  490351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:02:22.364009  490351 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:02:22.364028  490351 kubeadm.go:158] found existing configuration files:
	
	I0110 10:02:22.364080  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 10:02:22.377483  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:02:22.377599  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:02:22.387938  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 10:02:22.399386  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:02:22.399500  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:02:22.407042  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 10:02:22.421172  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:02:22.421285  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:02:22.432909  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 10:02:22.444516  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:02:22.444637  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 10:02:22.457744  490351 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:02:22.529233  490351 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:02:22.529354  490351 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:02:22.663263  490351 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:02:22.663411  490351 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:02:22.663492  490351 kubeadm.go:319] OS: Linux
	I0110 10:02:22.663563  490351 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:02:22.663670  490351 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:02:22.663761  490351 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:02:22.663841  490351 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:02:22.663923  490351 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:02:22.663995  490351 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:02:22.664045  490351 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:02:22.664096  490351 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:02:22.664152  490351 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:02:22.789192  490351 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:02:22.789359  490351 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:02:22.789481  490351 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:02:22.807386  490351 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:02:22.812689  490351 out.go:252]   - Generating certificates and keys ...
	I0110 10:02:22.812850  490351 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:02:22.812970  490351 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:02:22.813528  490351 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 10:02:22.814201  490351 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 10:02:22.814820  490351 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 10:02:22.821187  490351 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 10:02:22.822578  490351 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 10:02:22.832829  490351 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 10:02:22.832919  490351 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 10:02:22.832992  490351 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 10:02:22.833030  490351 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 10:02:22.833085  490351 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:02:23.192856  490351 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:02:23.508049  490351 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:02:23.719185  490351 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:02:24.061850  490351 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:02:24.248896  490351 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:02:24.248995  490351 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:02:24.249547  490351 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:02:26.351439  501605 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.680266304s)
	I0110 10:02:26.351499  501605 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-729486" to be "Ready" ...
	I0110 10:02:26.351802  501605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.593405165s)
	I0110 10:02:26.352887  501605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.707228235s)
	I0110 10:02:26.385846  501605 node_ready.go:49] node "old-k8s-version-729486" is "Ready"
	I0110 10:02:26.385881  501605 node_ready.go:38] duration metric: took 34.368098ms for node "old-k8s-version-729486" to be "Ready" ...
	I0110 10:02:26.385926  501605 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:02:26.386023  501605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:02:26.967653  501605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.82067345s)
	I0110 10:02:26.967852  501605 api_server.go:72] duration metric: took 7.701145531s to wait for apiserver process to appear ...
	I0110 10:02:26.967903  501605 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:02:26.967938  501605 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:02:26.970869  501605 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-729486 addons enable metrics-server
	
	I0110 10:02:26.973776  501605 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I0110 10:02:24.252594  490351 out.go:252]   - Booting up control plane ...
	I0110 10:02:24.252697  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:02:24.252776  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:02:24.253741  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:02:24.281294  490351 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:02:24.281402  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:02:24.289897  490351 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:02:24.289997  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:02:24.290037  490351 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:02:24.511360  490351 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:02:24.511481  490351 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:02:26.976834  501605 addons.go:530] duration metric: took 7.709707947s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I0110 10:02:26.980881  501605 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:02:26.982673  501605 api_server.go:141] control plane version: v1.28.0
	I0110 10:02:26.982707  501605 api_server.go:131] duration metric: took 14.791189ms to wait for apiserver health ...
	I0110 10:02:26.982725  501605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:02:26.988011  501605 system_pods.go:59] 8 kube-system pods found
	I0110 10:02:26.988059  501605 system_pods.go:61] "coredns-5dd5756b68-xsgtg" [c3718681-9e27-4160-b9fa-8462b5c71a26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:02:26.988069  501605 system_pods.go:61] "etcd-old-k8s-version-729486" [76c695b2-b8aa-4ff0-ba29-32d4d846f6d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:02:26.988076  501605 system_pods.go:61] "kindnet-mcvws" [9a148c52-2e43-474d-accb-ff93db5e4756] Running
	I0110 10:02:26.988083  501605 system_pods.go:61] "kube-apiserver-old-k8s-version-729486" [ca0696bd-6f69-4f84-88e3-c1e430041c0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:02:26.988091  501605 system_pods.go:61] "kube-controller-manager-old-k8s-version-729486" [87cb675c-5667-4343-95c4-37ea7b51b941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:02:26.988207  501605 system_pods.go:61] "kube-proxy-szwsd" [550b3042-ef9d-4e44-978b-f18534dc02bb] Running
	I0110 10:02:26.988227  501605 system_pods.go:61] "kube-scheduler-old-k8s-version-729486" [35c66509-77a2-4846-b919-14c61b09566f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:02:26.988232  501605 system_pods.go:61] "storage-provisioner" [016f019c-d231-41db-b408-7bc9e1fb613e] Running
	I0110 10:02:26.988245  501605 system_pods.go:74] duration metric: took 5.496539ms to wait for pod list to return data ...
	I0110 10:02:26.988258  501605 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:02:26.995074  501605 default_sa.go:45] found service account: "default"
	I0110 10:02:26.995102  501605 default_sa.go:55] duration metric: took 6.829175ms for default service account to be created ...
	I0110 10:02:26.995117  501605 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:02:27.004027  501605 system_pods.go:86] 8 kube-system pods found
	I0110 10:02:27.004147  501605 system_pods.go:89] "coredns-5dd5756b68-xsgtg" [c3718681-9e27-4160-b9fa-8462b5c71a26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:02:27.004195  501605 system_pods.go:89] "etcd-old-k8s-version-729486" [76c695b2-b8aa-4ff0-ba29-32d4d846f6d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:02:27.004222  501605 system_pods.go:89] "kindnet-mcvws" [9a148c52-2e43-474d-accb-ff93db5e4756] Running
	I0110 10:02:27.004249  501605 system_pods.go:89] "kube-apiserver-old-k8s-version-729486" [ca0696bd-6f69-4f84-88e3-c1e430041c0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:02:27.004275  501605 system_pods.go:89] "kube-controller-manager-old-k8s-version-729486" [87cb675c-5667-4343-95c4-37ea7b51b941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:02:27.004310  501605 system_pods.go:89] "kube-proxy-szwsd" [550b3042-ef9d-4e44-978b-f18534dc02bb] Running
	I0110 10:02:27.004342  501605 system_pods.go:89] "kube-scheduler-old-k8s-version-729486" [35c66509-77a2-4846-b919-14c61b09566f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:02:27.004365  501605 system_pods.go:89] "storage-provisioner" [016f019c-d231-41db-b408-7bc9e1fb613e] Running
	I0110 10:02:27.004390  501605 system_pods.go:126] duration metric: took 9.266417ms to wait for k8s-apps to be running ...
	I0110 10:02:27.004423  501605 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:02:27.004564  501605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:02:27.026886  501605 system_svc.go:56] duration metric: took 22.449725ms WaitForService to wait for kubelet
	I0110 10:02:27.026941  501605 kubeadm.go:587] duration metric: took 7.760238337s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:02:27.026985  501605 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:02:27.032023  501605 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:02:27.032100  501605 node_conditions.go:123] node cpu capacity is 2
	I0110 10:02:27.032170  501605 node_conditions.go:105] duration metric: took 5.171619ms to run NodePressure ...
	I0110 10:02:27.032198  501605 start.go:242] waiting for startup goroutines ...
	I0110 10:02:27.032219  501605 start.go:247] waiting for cluster config update ...
	I0110 10:02:27.032257  501605 start.go:256] writing updated cluster config ...
	I0110 10:02:27.032619  501605 ssh_runner.go:195] Run: rm -f paused
	I0110 10:02:27.036704  501605 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:02:27.042248  501605 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xsgtg" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 10:02:29.048416  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:31.048784  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:33.048943  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:35.547867  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:37.548770  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:40.055674  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:42.548625  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:44.548816  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:47.049850  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:49.547825  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:51.548570  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:54.048054  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:56.048797  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:58.548783  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:03:01.047979  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:03:03.048462  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	I0110 10:03:05.048305  501605 pod_ready.go:94] pod "coredns-5dd5756b68-xsgtg" is "Ready"
	I0110 10:03:05.048335  501605 pod_ready.go:86] duration metric: took 38.006059001s for pod "coredns-5dd5756b68-xsgtg" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.051700  501605 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.056664  501605 pod_ready.go:94] pod "etcd-old-k8s-version-729486" is "Ready"
	I0110 10:03:05.056693  501605 pod_ready.go:86] duration metric: took 4.964814ms for pod "etcd-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.059428  501605 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.064145  501605 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-729486" is "Ready"
	I0110 10:03:05.064188  501605 pod_ready.go:86] duration metric: took 4.734601ms for pod "kube-apiserver-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.067016  501605 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.246730  501605 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-729486" is "Ready"
	I0110 10:03:05.246761  501605 pod_ready.go:86] duration metric: took 179.721102ms for pod "kube-controller-manager-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.446457  501605 pod_ready.go:83] waiting for pod "kube-proxy-szwsd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.845672  501605 pod_ready.go:94] pod "kube-proxy-szwsd" is "Ready"
	I0110 10:03:05.845703  501605 pod_ready.go:86] duration metric: took 399.21824ms for pod "kube-proxy-szwsd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:06.046721  501605 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:06.446662  501605 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-729486" is "Ready"
	I0110 10:03:06.446707  501605 pod_ready.go:86] duration metric: took 399.95969ms for pod "kube-scheduler-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:06.446721  501605 pod_ready.go:40] duration metric: took 39.409936342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:03:06.502367  501605 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I0110 10:03:06.505711  501605 out.go:203] 
	W0110 10:03:06.508539  501605 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 10:03:06.511688  501605 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:03:06.514628  501605 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-729486" cluster and "default" namespace by default
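The client/server skew flagged just above (kubectl 1.33.2 against a 1.28.0 cluster, minor skew 5) can be confirmed directly; a short sketch, assuming the old-k8s-version-729486 profile from this run is still up:

	# Show the local client version alongside the cluster's server version
	kubectl version
	# Or use the kubectl matching the cluster, as the log suggests
	minikube -p old-k8s-version-729486 kubectl -- get pods -A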
	
	
	==> CRI-O <==
	Jan 10 10:02:56 old-k8s-version-729486 crio[662]: time="2026-01-10T10:02:56.760652336Z" level=info msg="Started container" PID=1680 containerID=18fec54aabaa17d53d921341aeb10a80766bce2af0d5fb40f462662b29ee03f8 description=kube-system/storage-provisioner/storage-provisioner id=7be32f43-fe72-4495-9195-9ed3fabc64aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ddf837f90f1058dff48c85e749c3d14e6092170e419892224abcebc6549bf3c
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.535294409Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8a505654-bb65-448b-aba7-f8fdba8bb09e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.536549236Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=711509c5-c038-4f53-b8d4-988584943524 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.537645184Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445/dashboard-metrics-scraper" id=7aa81463-6729-4593-8d26-9f2065c0dce3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.537782318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.551420541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.552112578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.566644285Z" level=info msg="Created container 2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445/dashboard-metrics-scraper" id=7aa81463-6729-4593-8d26-9f2065c0dce3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.567438338Z" level=info msg="Starting container: 2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337" id=19ed699c-9fbc-41ca-9135-664c00f37a53 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.569451254Z" level=info msg="Started container" PID=1692 containerID=2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445/dashboard-metrics-scraper id=19ed699c-9fbc-41ca-9135-664c00f37a53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b0528f76aab8da2916f71090a8f4752db1594f30164e9d3cefab5a9052158101
	Jan 10 10:03:01 old-k8s-version-729486 conmon[1690]: conmon 2e580756364032f5d0f9 <ninfo>: container 1692 exited with status 1
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.744188769Z" level=info msg="Removing container: f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0" id=5ecc2b79-91e5-4aef-b3d6-7ecfbf74655f name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.756298652Z" level=info msg="Error loading conmon cgroup of container f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0: cgroup deleted" id=5ecc2b79-91e5-4aef-b3d6-7ecfbf74655f name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.762167019Z" level=info msg="Removed container f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445/dashboard-metrics-scraper" id=5ecc2b79-91e5-4aef-b3d6-7ecfbf74655f name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.436877612Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.436914445Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.441402889Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.441438401Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.454532468Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.454789653Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.460856283Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.46089429Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.460927267Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.467275623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.467309904Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2e58075636403       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   b0528f76aab8d       dashboard-metrics-scraper-5f989dc9cf-jt445       kubernetes-dashboard
	18fec54aabaa1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   8ddf837f90f10       storage-provisioner                              kube-system
	44b2462370e72       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   58ddd93997c99       kubernetes-dashboard-8694d4445c-c5xh5            kubernetes-dashboard
	1fa19c6840d4b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   49a998d6d8477       busybox                                          default
	b5741a9b10c5f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   5da831b10b8ca       coredns-5dd5756b68-xsgtg                         kube-system
	cfe9bbe8014e3       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   6e000f0412a81       kindnet-mcvws                                    kube-system
	e840a9a6d843f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   9f2b68caac326       kube-proxy-szwsd                                 kube-system
	113f2c97bb2d9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   8ddf837f90f10       storage-provisioner                              kube-system
	5cc3bd4bc4c1f       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   cdd626251d4b2       kube-controller-manager-old-k8s-version-729486   kube-system
	c0a4eb50e2c15       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   83350cee52e51       etcd-old-k8s-version-729486                      kube-system
	4129c584728a1       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   31b8f9f305bfd       kube-apiserver-old-k8s-version-729486            kube-system
	b8d4be0f660bd       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   14a80284559eb       kube-scheduler-old-k8s-version-729486            kube-system
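The dashboard-metrics-scraper container above is in state Exited on attempt 2, matching the conmon "exited with status 1" message in the CRI-O section. A sketch of how its logs could be pulled from the node for the report, assuming the container ID prefix shown in the table is still valid:

	# Read the exited container's logs from inside the minikube node
	minikube -p old-k8s-version-729486 ssh -- sudo crictl logs 2e58075636403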
	
	
	==> coredns [b5741a9b10c5f413ff081cf53038322a13bc68558e9bcb48ec9f693161763914] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53525 - 20486 "HINFO IN 4725109551558841005.285351232193025376. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.035321013s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
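The repeated "Still waiting on: kubernetes" lines from the ready plugin line up with the ~38s the test spent earlier waiting for coredns-5dd5756b68-xsgtg to become Ready: the readiness endpoint stays unhealthy until the kubernetes plugin has synced with the API server. A sketch of how that readiness could be watched during the run, using the same kube-dns label the test waits on:

	# Watch the CoreDNS pod's Ready condition in the kube-system namespace
	kubectl -n kube-system get pods -l k8s-app=kube-dns -w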
	
	
	==> describe nodes <==
	Name:               old-k8s-version-729486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-729486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=old-k8s-version-729486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_01_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:01:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-729486
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:03:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:02:55 +0000   Sat, 10 Jan 2026 10:01:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:02:55 +0000   Sat, 10 Jan 2026 10:01:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:02:55 +0000   Sat, 10 Jan 2026 10:01:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:02:55 +0000   Sat, 10 Jan 2026 10:01:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-729486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                6835df29-9649-49d8-a5dc-2264bb66093f
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-xsgtg                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-729486                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-mcvws                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-729486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-729486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-szwsd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-729486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-jt445        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-c5xh5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-729486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-729486 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                   node-controller  Node old-k8s-version-729486 event: Registered Node old-k8s-version-729486 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-729486 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node old-k8s-version-729486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-729486 event: Registered Node old-k8s-version-729486 in Controller
	
	
	==> dmesg <==
	[Jan10 09:29] overlayfs: idmapped layers are currently not supported
	[Jan10 09:30] overlayfs: idmapped layers are currently not supported
	[Jan10 09:31] overlayfs: idmapped layers are currently not supported
	[Jan10 09:35] overlayfs: idmapped layers are currently not supported
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c0a4eb50e2c15f0c909a14942c5e6e51335dfc5f1b4c205776a384e82feb0830] <==
	{"level":"info","ts":"2026-01-10T10:02:19.637407Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:02:19.637512Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:02:19.637912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T10:02:19.640925Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2026-01-10T10:02:19.641118Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T10:02:19.641658Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T10:02:19.681942Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T10:02:19.68228Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T10:02:19.682347Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T10:02:19.68243Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:02:19.684532Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:02:21.024576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T10:02:21.024684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:02:21.02474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:02:21.024788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T10:02:21.024819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:02:21.024852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T10:02:21.024885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:02:21.031065Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-729486 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:02:21.031167Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:02:21.032297Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T10:02:21.031225Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:02:21.037565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T10:02:21.04854Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:02:21.052532Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:03:21 up  2:45,  0 user,  load average: 1.28, 1.46, 1.92
	Linux old-k8s-version-729486 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cfe9bbe8014e3d63ffcc2a2b208a8181dc00308bdee332d52426fe84c746f58c] <==
	I0110 10:02:26.251986       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:02:26.317032       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:02:26.317172       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:02:26.317192       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:02:26.317204       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:02:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:02:26.428814       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:02:26.428832       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:02:26.428850       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:02:26.429654       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 10:02:56.429713       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 10:02:56.429717       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0110 10:02:56.429813       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 10:02:56.429863       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0110 10:02:58.029237       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:02:58.029268       1 metrics.go:72] Registering metrics
	I0110 10:02:58.029348       1 controller.go:711] "Syncing nftables rules"
	I0110 10:03:06.429136       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:03:06.429205       1 main.go:301] handling current node
	I0110 10:03:16.433534       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:03:16.433570       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4129c584728a1d9d005e5900b1d29bb8d94b5826d72dd240b3b77773e40abcac] <==
	I0110 10:02:24.881191       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 10:02:24.883692       1 shared_informer.go:318] Caches are synced for configmaps
	I0110 10:02:24.883810       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0110 10:02:24.883825       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 10:02:24.883942       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0110 10:02:24.891221       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 10:02:24.900629       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 10:02:24.904677       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:02:24.912781       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0110 10:02:24.930782       1 aggregator.go:166] initial CRD sync complete...
	I0110 10:02:24.930819       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 10:02:24.930827       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 10:02:24.930837       1 cache.go:39] Caches are synced for autoregister controller
	E0110 10:02:25.001317       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 10:02:25.511803       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 10:02:26.727002       1 controller.go:624] quota admission added evaluator for: namespaces
	I0110 10:02:26.776340       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 10:02:26.810481       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:02:26.831416       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:02:26.844909       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 10:02:26.912625       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.64.223"}
	I0110 10:02:26.953727       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.130.167"}
	I0110 10:02:37.335101       1 controller.go:624] quota admission added evaluator for: endpoints
	I0110 10:02:37.734463       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0110 10:02:37.870661       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5cc3bd4bc4c1fca307ced2a934a7aef674e63f5f91fcd54697c1c0e8a7e5e676] <==
	I0110 10:02:37.744265       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I0110 10:02:37.829853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="497.615969ms"
	I0110 10:02:37.831047       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-jt445"
	I0110 10:02:37.831417       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-c5xh5"
	I0110 10:02:37.831363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.806µs"
	I0110 10:02:37.860295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="115.847053ms"
	I0110 10:02:37.864991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="124.159004ms"
	I0110 10:02:37.888362       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 10:02:37.892907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="32.553288ms"
	I0110 10:02:37.893059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="28.025008ms"
	I0110 10:02:37.895385       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 10:02:37.895475       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 10:02:37.895767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.239µs"
	I0110 10:02:37.895869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.73µs"
	I0110 10:02:37.896538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.148µs"
	I0110 10:02:37.910901       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.428µs"
	I0110 10:02:42.702766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.979µs"
	I0110 10:02:43.710280       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.774µs"
	I0110 10:02:44.715867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.169µs"
	I0110 10:02:47.739933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.186284ms"
	I0110 10:02:47.740480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.613µs"
	I0110 10:03:01.759117       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.075µs"
	I0110 10:03:04.733250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.967432ms"
	I0110 10:03:04.734353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.41µs"
	I0110 10:03:08.166967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.153µs"
	
	
	==> kube-proxy [e840a9a6d843f4f94134f005d142bc77765ec34f5d780777c800b3831d78be18] <==
	I0110 10:02:26.088675       1 server_others.go:69] "Using iptables proxy"
	I0110 10:02:26.114910       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0110 10:02:26.277698       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:02:26.281481       1 server_others.go:152] "Using iptables Proxier"
	I0110 10:02:26.281519       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 10:02:26.281527       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 10:02:26.281552       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 10:02:26.281900       1 server.go:846] "Version info" version="v1.28.0"
	I0110 10:02:26.281911       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:02:26.282565       1 config.go:188] "Starting service config controller"
	I0110 10:02:26.282588       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 10:02:26.282604       1 config.go:97] "Starting endpoint slice config controller"
	I0110 10:02:26.282607       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 10:02:26.283050       1 config.go:315] "Starting node config controller"
	I0110 10:02:26.283056       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 10:02:26.382892       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 10:02:26.382967       1 shared_informer.go:318] Caches are synced for service config
	I0110 10:02:26.383106       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b8d4be0f660bd2d5bf4c919b8f3ef7f06479e1cc6044562ee85d22b026733d09] <==
	I0110 10:02:22.646836       1 serving.go:348] Generated self-signed cert in-memory
	I0110 10:02:25.017036       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0110 10:02:25.017276       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:02:25.025853       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0110 10:02:25.028615       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0110 10:02:25.028711       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0110 10:02:25.028768       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 10:02:25.028800       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0110 10:02:25.028837       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0110 10:02:25.028865       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0110 10:02:25.028713       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0110 10:02:25.129601       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0110 10:02:25.129678       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0110 10:02:25.129769       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 10 10:02:37 old-k8s-version-729486 kubelet[789]: I0110 10:02:37.881715     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fggl\" (UniqueName: \"kubernetes.io/projected/131040d6-af8c-40cf-8970-f218be5ab7fc-kube-api-access-5fggl\") pod \"kubernetes-dashboard-8694d4445c-c5xh5\" (UID: \"131040d6-af8c-40cf-8970-f218be5ab7fc\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c5xh5"
	Jan 10 10:02:37 old-k8s-version-729486 kubelet[789]: I0110 10:02:37.881850     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tq6\" (UniqueName: \"kubernetes.io/projected/35fa1614-40e8-49f6-b3e3-7176013da408-kube-api-access-l2tq6\") pod \"dashboard-metrics-scraper-5f989dc9cf-jt445\" (UID: \"35fa1614-40e8-49f6-b3e3-7176013da408\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445"
	Jan 10 10:02:37 old-k8s-version-729486 kubelet[789]: I0110 10:02:37.881959     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/35fa1614-40e8-49f6-b3e3-7176013da408-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-jt445\" (UID: \"35fa1614-40e8-49f6-b3e3-7176013da408\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445"
	Jan 10 10:02:38 old-k8s-version-729486 kubelet[789]: W0110 10:02:38.209098     789 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/crio-b0528f76aab8da2916f71090a8f4752db1594f30164e9d3cefab5a9052158101 WatchSource:0}: Error finding container b0528f76aab8da2916f71090a8f4752db1594f30164e9d3cefab5a9052158101: Status 404 returned error can't find the container with id b0528f76aab8da2916f71090a8f4752db1594f30164e9d3cefab5a9052158101
	Jan 10 10:02:38 old-k8s-version-729486 kubelet[789]: W0110 10:02:38.233018     789 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/crio-58ddd93997c99a933aee967f4b72e8e74396e3a22114bb6de24cae6b34e9f2eb WatchSource:0}: Error finding container 58ddd93997c99a933aee967f4b72e8e74396e3a22114bb6de24cae6b34e9f2eb: Status 404 returned error can't find the container with id 58ddd93997c99a933aee967f4b72e8e74396e3a22114bb6de24cae6b34e9f2eb
	Jan 10 10:02:42 old-k8s-version-729486 kubelet[789]: I0110 10:02:42.686609     789 scope.go:117] "RemoveContainer" containerID="45dad10ccbe6bce30734e6480e83839f6b256d4d25b8d9e4e92228440bb45f5a"
	Jan 10 10:02:43 old-k8s-version-729486 kubelet[789]: I0110 10:02:43.690376     789 scope.go:117] "RemoveContainer" containerID="45dad10ccbe6bce30734e6480e83839f6b256d4d25b8d9e4e92228440bb45f5a"
	Jan 10 10:02:43 old-k8s-version-729486 kubelet[789]: I0110 10:02:43.690680     789 scope.go:117] "RemoveContainer" containerID="f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0"
	Jan 10 10:02:43 old-k8s-version-729486 kubelet[789]: E0110 10:02:43.690973     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jt445_kubernetes-dashboard(35fa1614-40e8-49f6-b3e3-7176013da408)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445" podUID="35fa1614-40e8-49f6-b3e3-7176013da408"
	Jan 10 10:02:44 old-k8s-version-729486 kubelet[789]: I0110 10:02:44.694378     789 scope.go:117] "RemoveContainer" containerID="f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0"
	Jan 10 10:02:44 old-k8s-version-729486 kubelet[789]: E0110 10:02:44.694646     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jt445_kubernetes-dashboard(35fa1614-40e8-49f6-b3e3-7176013da408)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445" podUID="35fa1614-40e8-49f6-b3e3-7176013da408"
	Jan 10 10:02:48 old-k8s-version-729486 kubelet[789]: I0110 10:02:48.147853     789 scope.go:117] "RemoveContainer" containerID="f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0"
	Jan 10 10:02:48 old-k8s-version-729486 kubelet[789]: E0110 10:02:48.148209     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jt445_kubernetes-dashboard(35fa1614-40e8-49f6-b3e3-7176013da408)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445" podUID="35fa1614-40e8-49f6-b3e3-7176013da408"
	Jan 10 10:02:56 old-k8s-version-729486 kubelet[789]: I0110 10:02:56.726918     789 scope.go:117] "RemoveContainer" containerID="113f2c97bb2d9820a9ff596f3fde5fccae866c32a36827c6e86be9c58fdc01f2"
	Jan 10 10:02:56 old-k8s-version-729486 kubelet[789]: I0110 10:02:56.750297     789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c5xh5" podStartSLOduration=10.500591096 podCreationTimestamp="2026-01-10 10:02:37 +0000 UTC" firstStartedPulling="2026-01-10 10:02:38.237217091 +0000 UTC m=+19.880407246" lastFinishedPulling="2026-01-10 10:02:47.486862385 +0000 UTC m=+29.130052540" observedRunningTime="2026-01-10 10:02:47.723315165 +0000 UTC m=+29.366505320" watchObservedRunningTime="2026-01-10 10:02:56.75023639 +0000 UTC m=+38.393426537"
	Jan 10 10:03:01 old-k8s-version-729486 kubelet[789]: I0110 10:03:01.534638     789 scope.go:117] "RemoveContainer" containerID="f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0"
	Jan 10 10:03:01 old-k8s-version-729486 kubelet[789]: I0110 10:03:01.742037     789 scope.go:117] "RemoveContainer" containerID="f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0"
	Jan 10 10:03:01 old-k8s-version-729486 kubelet[789]: I0110 10:03:01.742254     789 scope.go:117] "RemoveContainer" containerID="2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337"
	Jan 10 10:03:01 old-k8s-version-729486 kubelet[789]: E0110 10:03:01.742575     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jt445_kubernetes-dashboard(35fa1614-40e8-49f6-b3e3-7176013da408)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445" podUID="35fa1614-40e8-49f6-b3e3-7176013da408"
	Jan 10 10:03:08 old-k8s-version-729486 kubelet[789]: I0110 10:03:08.148085     789 scope.go:117] "RemoveContainer" containerID="2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337"
	Jan 10 10:03:08 old-k8s-version-729486 kubelet[789]: E0110 10:03:08.148395     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jt445_kubernetes-dashboard(35fa1614-40e8-49f6-b3e3-7176013da408)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445" podUID="35fa1614-40e8-49f6-b3e3-7176013da408"
	Jan 10 10:03:18 old-k8s-version-729486 kubelet[789]: I0110 10:03:18.832181     789 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 10:03:18 old-k8s-version-729486 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 10:03:18 old-k8s-version-729486 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 10:03:18 old-k8s-version-729486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [44b2462370e7204654417d02b3c6a94563343ab46fe0617bebb08e76506c8f1b] <==
	2026/01/10 10:02:47 Using namespace: kubernetes-dashboard
	2026/01/10 10:02:47 Using in-cluster config to connect to apiserver
	2026/01/10 10:02:47 Using secret token for csrf signing
	2026/01/10 10:02:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 10:02:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 10:02:47 Successful initial request to the apiserver, version: v1.28.0
	2026/01/10 10:02:47 Generating JWE encryption key
	2026/01/10 10:02:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 10:02:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 10:02:47 Initializing JWE encryption key from synchronized object
	2026/01/10 10:02:47 Creating in-cluster Sidecar client
	2026/01/10 10:02:48 Serving insecurely on HTTP port: 9090
	2026/01/10 10:02:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:03:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:02:47 Starting overwatch
	
	
	==> storage-provisioner [113f2c97bb2d9820a9ff596f3fde5fccae866c32a36827c6e86be9c58fdc01f2] <==
	I0110 10:02:26.136870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 10:02:56.140293       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [18fec54aabaa17d53d921341aeb10a80766bce2af0d5fb40f462662b29ee03f8] <==
	I0110 10:02:56.771657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:02:56.786168       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:02:56.786286       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 10:03:14.188781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:03:14.189022       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-729486_642fc9f3-7553-4a47-996a-d0963eb16563!
	I0110 10:03:14.190456       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b364ec7-3081-49c8-b8f1-66ca586b914b", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-729486_642fc9f3-7553-4a47-996a-d0963eb16563 became leader
	I0110 10:03:14.291540       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-729486_642fc9f3-7553-4a47-996a-d0963eb16563!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-729486 -n old-k8s-version-729486
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-729486 -n old-k8s-version-729486: exit status 2 (351.716244ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-729486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-729486
helpers_test.go:244: (dbg) docker inspect old-k8s-version-729486:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a",
	        "Created": "2026-01-10T10:00:53.623819553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:02:11.67873097Z",
	            "FinishedAt": "2026-01-10T10:02:10.864770503Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/hostname",
	        "HostsPath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/hosts",
	        "LogPath": "/var/lib/docker/containers/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a-json.log",
	        "Name": "/old-k8s-version-729486",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-729486:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-729486",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a",
	                "LowerDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed437eae824056006a26ef22a845b1e0feee5015e66d09783daa5aeda474d641/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-729486",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-729486/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-729486",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-729486",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-729486",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c02d466327c7bb6916d1250208bae3879dd6e4a0477f53bef9eb515dc15eae8",
	            "SandboxKey": "/var/run/docker/netns/0c02d466327c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-729486": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:6b:91:74:c7:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2fc70c7426464ff9890052e4156c669ae44556450aab6cdc6b7787e2fd7c393f",
	                    "EndpointID": "3f39f5e07d3706d99ac87300c49a2a82a893a3d7583a2e735e0a1f2e7e6cb867",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-729486",
	                        "e3db4a48fc4a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-729486 -n old-k8s-version-729486
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-729486 -n old-k8s-version-729486: exit status 2 (349.815182ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-729486 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-729486 logs -n 25: (1.268823146s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-255897 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo containerd config dump                                                                                                                                                                                                  │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo crio config                                                                                                                                                                                                             │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ delete  │ -p cilium-255897                                                                                                                                                                                                                              │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:57 UTC │ 10 Jan 26 09:58 UTC │
	│ delete  │ -p cert-expiration-599529                                                                                                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │ 10 Jan 26 09:58 UTC │
	│ start   │ -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-524845 │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │                     │
	│ delete  │ -p force-systemd-env-646877                                                                                                                                                                                                                   │ force-systemd-env-646877  │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p cert-options-525619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ cert-options-525619 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ -p cert-options-525619 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ delete  │ -p cert-options-525619                                                                                                                                                                                                                        │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │                     │
	│ stop    │ -p old-k8s-version-729486 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │ 10 Jan 26 10:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-729486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:02 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:03 UTC │
	│ image   │ old-k8s-version-729486 image list --format=json                                                                                                                                                                                               │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ pause   │ -p old-k8s-version-729486 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:02:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:02:11.398974  501605 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:02:11.399162  501605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:02:11.399191  501605 out.go:374] Setting ErrFile to fd 2...
	I0110 10:02:11.399212  501605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:02:11.399478  501605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:02:11.399871  501605 out.go:368] Setting JSON to false
	I0110 10:02:11.400809  501605 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9881,"bootTime":1768029451,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:02:11.400912  501605 start.go:143] virtualization:  
	I0110 10:02:11.406072  501605 out.go:179] * [old-k8s-version-729486] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:02:11.409096  501605 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:02:11.409144  501605 notify.go:221] Checking for updates...
	I0110 10:02:11.414984  501605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:02:11.417954  501605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:02:11.420820  501605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:02:11.423721  501605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:02:11.426690  501605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:02:11.430045  501605 config.go:182] Loaded profile config "old-k8s-version-729486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 10:02:11.433357  501605 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I0110 10:02:11.436130  501605 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:02:11.463909  501605 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:02:11.464031  501605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:02:11.531606  501605 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:02:11.521934243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:02:11.531728  501605 docker.go:319] overlay module found
	I0110 10:02:11.536657  501605 out.go:179] * Using the docker driver based on existing profile
	I0110 10:02:11.539508  501605 start.go:309] selected driver: docker
	I0110 10:02:11.539528  501605 start.go:928] validating driver "docker" against &{Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:02:11.539636  501605 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:02:11.540364  501605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:02:11.594341  501605 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:02:11.584959251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:02:11.594672  501605 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:02:11.594710  501605 cni.go:84] Creating CNI manager for ""
	I0110 10:02:11.594769  501605 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:02:11.594813  501605 start.go:353] cluster config:
	{Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:02:11.598055  501605 out.go:179] * Starting "old-k8s-version-729486" primary control-plane node in "old-k8s-version-729486" cluster
	I0110 10:02:11.600950  501605 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:02:11.603924  501605 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:02:11.606828  501605 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 10:02:11.606879  501605 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:02:11.606901  501605 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:02:11.606904  501605 cache.go:65] Caching tarball of preloaded images
	I0110 10:02:11.607031  501605 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:02:11.607042  501605 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0110 10:02:11.607145  501605 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/config.json ...
	I0110 10:02:11.626424  501605 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:02:11.626446  501605 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:02:11.626462  501605 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:02:11.626492  501605 start.go:360] acquireMachinesLock for old-k8s-version-729486: {Name:mk0f30d4f7ea165498ccd896959105635842f094 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:02:11.626551  501605 start.go:364] duration metric: took 37.638µs to acquireMachinesLock for "old-k8s-version-729486"
	I0110 10:02:11.626580  501605 start.go:96] Skipping create...Using existing machine configuration
	I0110 10:02:11.626589  501605 fix.go:54] fixHost starting: 
	I0110 10:02:11.626851  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:11.643170  501605 fix.go:112] recreateIfNeeded on old-k8s-version-729486: state=Stopped err=<nil>
	W0110 10:02:11.643206  501605 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 10:02:11.646478  501605 out.go:252] * Restarting existing docker container for "old-k8s-version-729486" ...
	I0110 10:02:11.646579  501605 cli_runner.go:164] Run: docker start old-k8s-version-729486
	I0110 10:02:11.880789  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:11.904439  501605 kic.go:430] container "old-k8s-version-729486" state is running.
	I0110 10:02:11.904900  501605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-729486
	I0110 10:02:11.932007  501605 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/config.json ...
	I0110 10:02:11.932247  501605 machine.go:94] provisionDockerMachine start ...
	I0110 10:02:11.932315  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:11.964813  501605 main.go:144] libmachine: Using SSH client type: native
	I0110 10:02:11.965143  501605 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I0110 10:02:11.965159  501605 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:02:11.965801  501605 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 10:02:15.128574  501605 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-729486
	
	I0110 10:02:15.128602  501605 ubuntu.go:182] provisioning hostname "old-k8s-version-729486"
	I0110 10:02:15.128740  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:15.147621  501605 main.go:144] libmachine: Using SSH client type: native
	I0110 10:02:15.147936  501605 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I0110 10:02:15.147956  501605 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-729486 && echo "old-k8s-version-729486" | sudo tee /etc/hostname
	I0110 10:02:15.306065  501605 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-729486
	
	I0110 10:02:15.306189  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:15.324176  501605 main.go:144] libmachine: Using SSH client type: native
	I0110 10:02:15.324487  501605 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I0110 10:02:15.324579  501605 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-729486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-729486/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-729486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:02:15.476780  501605 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:02:15.476809  501605 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:02:15.476832  501605 ubuntu.go:190] setting up certificates
	I0110 10:02:15.476842  501605 provision.go:84] configureAuth start
	I0110 10:02:15.476916  501605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-729486
	I0110 10:02:15.493414  501605 provision.go:143] copyHostCerts
	I0110 10:02:15.493499  501605 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:02:15.493524  501605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:02:15.493606  501605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:02:15.493716  501605 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:02:15.493727  501605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:02:15.493754  501605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:02:15.493860  501605 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:02:15.493871  501605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:02:15.493898  501605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:02:15.493950  501605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-729486 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-729486]
	I0110 10:02:15.710865  501605 provision.go:177] copyRemoteCerts
	I0110 10:02:15.710991  501605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:02:15.711076  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:15.729525  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:15.832977  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:02:15.851751  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0110 10:02:15.869401  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:02:15.887536  501605 provision.go:87] duration metric: took 410.663242ms to configureAuth
	I0110 10:02:15.887576  501605 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:02:15.887786  501605 config.go:182] Loaded profile config "old-k8s-version-729486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 10:02:15.887901  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:15.908720  501605 main.go:144] libmachine: Using SSH client type: native
	I0110 10:02:15.909031  501605 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33424 <nil> <nil>}
	I0110 10:02:15.909051  501605 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:02:16.266866  501605 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:02:16.266890  501605 machine.go:97] duration metric: took 4.334626795s to provisionDockerMachine
	I0110 10:02:16.266902  501605 start.go:293] postStartSetup for "old-k8s-version-729486" (driver="docker")
	I0110 10:02:16.266923  501605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:02:16.267000  501605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:02:16.267055  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:16.288867  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:16.392324  501605 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:02:16.395734  501605 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:02:16.395764  501605 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:02:16.395778  501605 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:02:16.395835  501605 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:02:16.395920  501605 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:02:16.396024  501605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:02:16.409018  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:02:16.430148  501605 start.go:296] duration metric: took 163.221695ms for postStartSetup
	I0110 10:02:16.430231  501605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:02:16.430293  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:16.451116  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:16.553803  501605 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:02:16.558626  501605 fix.go:56] duration metric: took 4.932028845s for fixHost
	I0110 10:02:16.558656  501605 start.go:83] releasing machines lock for "old-k8s-version-729486", held for 4.93209054s
	I0110 10:02:16.558727  501605 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-729486
	I0110 10:02:16.576575  501605 ssh_runner.go:195] Run: cat /version.json
	I0110 10:02:16.576590  501605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:02:16.576634  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:16.576657  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:16.593723  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:16.602125  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:16.794981  501605 ssh_runner.go:195] Run: systemctl --version
	I0110 10:02:16.801511  501605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:02:16.838059  501605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:02:16.842721  501605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:02:16.842823  501605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:02:16.850868  501605 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
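The find/mv invocation above disables any stray bridge or podman CNI configs by renaming them with a .mk_disabled suffix (here nothing matched, so there was nothing to disable). As an illustrative sketch only, not minikube's code, the same rename could be expressed in Go with the standard library:

// cni_disable_bridge.go - illustrative sketch, not part of minikube.
// Mirrors the "find ... -exec mv {} {}.mk_disabled" command logged above:
// rename bridge/podman CNI configs in /etc/cni/net.d so cri-o ignores them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, path := range matches {
			if strings.HasSuffix(path, ".mk_disabled") {
				continue // already disabled, like the "-not -name *.mk_disabled" filter
			}
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			fmt.Println("disabled", path)
		}
	}
}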
	I0110 10:02:16.850893  501605 start.go:496] detecting cgroup driver to use...
	I0110 10:02:16.850942  501605 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:02:16.850997  501605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:02:16.866457  501605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:02:16.879639  501605 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:02:16.879708  501605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:02:16.895691  501605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:02:16.908884  501605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:02:17.022230  501605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:02:17.156133  501605 docker.go:234] disabling docker service ...
	I0110 10:02:17.156203  501605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:02:17.172836  501605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:02:17.186287  501605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:02:17.307588  501605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:02:17.441504  501605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:02:17.454576  501605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:02:17.470483  501605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0110 10:02:17.470591  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.479584  501605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:02:17.479673  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.488948  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.498151  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.507740  501605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:02:17.516330  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.525912  501605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.534746  501605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:02:17.543671  501605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:02:17.551328  501605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:02:17.558665  501605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:02:17.679630  501605 ssh_runner.go:195] Run: sudo systemctl restart crio
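The sed commands above pin pause_image and cgroup_manager (plus the conmon_cgroup and default_sysctls keys) in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A minimal sketch of the two core edits in Go with the standard library, assuming the same drop-in path and values as the log; this is not how minikube itself performs them:

// crio_conf_patch.go - illustrative sketch, not part of minikube.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // drop-in edited in the log above

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf := string(data)

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// cri-o must then be restarted (sudo systemctl restart crio), as the log does next.
}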
	I0110 10:02:17.862865  501605 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:02:17.862937  501605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:02:17.866766  501605 start.go:574] Will wait 60s for crictl version
	I0110 10:02:17.866906  501605 ssh_runner.go:195] Run: which crictl
	I0110 10:02:17.870345  501605 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:02:17.895286  501605 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:02:17.895370  501605 ssh_runner.go:195] Run: crio --version
	I0110 10:02:17.930357  501605 ssh_runner.go:195] Run: crio --version
	I0110 10:02:17.968251  501605 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I0110 10:02:17.970998  501605 cli_runner.go:164] Run: docker network inspect old-k8s-version-729486 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:02:17.987450  501605 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:02:17.991110  501605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:02:18.000740  501605 kubeadm.go:884] updating cluster {Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:02:18.000860  501605 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 10:02:18.000919  501605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:02:18.041557  501605 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:02:18.041578  501605 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:02:18.041640  501605 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:02:18.071063  501605 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:02:18.071127  501605 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:02:18.071151  501605 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I0110 10:02:18.071282  501605 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-729486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:02:18.071387  501605 ssh_runner.go:195] Run: crio config
	I0110 10:02:18.146892  501605 cni.go:84] Creating CNI manager for ""
	I0110 10:02:18.146925  501605 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:02:18.147007  501605 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:02:18.147065  501605 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-729486 NodeName:old-k8s-version-729486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:02:18.147299  501605 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-729486"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:02:18.147467  501605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0110 10:02:18.155939  501605 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:02:18.156037  501605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:02:18.163856  501605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0110 10:02:18.177243  501605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:02:18.190067  501605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
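The kubeadm.yaml.new copied above carries four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A hypothetical sanity check, not something minikube runs, that each document written to the node declares apiVersion and kind:

// kubeadm_yaml_check.go - illustrative sketch, not minikube's validation path.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		hasAPI := strings.Contains(doc, "apiVersion:")
		hasKind := strings.Contains(doc, "kind:")
		fmt.Printf("document %d: apiVersion=%v kind=%v\n", i, hasAPI, hasKind)
		if !hasAPI || !hasKind {
			os.Exit(1)
		}
	}
}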
	I0110 10:02:18.203425  501605 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:02:18.206991  501605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:02:18.216462  501605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:02:18.335653  501605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:02:18.354453  501605 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486 for IP: 192.168.76.2
	I0110 10:02:18.354530  501605 certs.go:195] generating shared ca certs ...
	I0110 10:02:18.354562  501605 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:02:18.354746  501605 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:02:18.354820  501605 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:02:18.354855  501605 certs.go:257] generating profile certs ...
	I0110 10:02:18.354965  501605 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.key
	I0110 10:02:18.355059  501605 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.key.3e623c7c
	I0110 10:02:18.355137  501605 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.key
	I0110 10:02:18.355274  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:02:18.355336  501605 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:02:18.355369  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:02:18.355424  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:02:18.355480  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:02:18.355528  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:02:18.355608  501605 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:02:18.356283  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:02:18.382398  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:02:18.400130  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:02:18.418777  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:02:18.442752  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0110 10:02:18.461446  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:02:18.482319  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:02:18.502712  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:02:18.526793  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:02:18.554314  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:02:18.577367  501605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:02:18.598933  501605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:02:18.612332  501605 ssh_runner.go:195] Run: openssl version
	I0110 10:02:18.618645  501605 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:02:18.626915  501605 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:02:18.638046  501605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:02:18.642525  501605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:02:18.642613  501605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:02:18.685587  501605 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:02:18.694364  501605 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:02:18.702896  501605 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:02:18.714785  501605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:02:18.718586  501605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:02:18.718697  501605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:02:18.759559  501605 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:02:18.766937  501605 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:02:18.774098  501605 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:02:18.781295  501605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:02:18.784812  501605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:02:18.784915  501605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:02:18.825991  501605 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:02:18.833229  501605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:02:18.836817  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 10:02:18.877482  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 10:02:18.918772  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 10:02:18.960022  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 10:02:19.019425  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 10:02:19.083096  501605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
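Each "openssl x509 -noout ... -checkend 86400" call above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration on restart. An equivalent check for one of the certificates listed, written as an illustrative Go sketch rather than minikube's implementation:

// cert_checkend.go - illustrative sketch, not part of minikube.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // one of the certs checked above
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in", path)
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400: fail if the cert is no longer valid 86400 seconds from now.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}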
	I0110 10:02:19.160272  501605 kubeadm.go:401] StartCluster: {Name:old-k8s-version-729486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-729486 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:02:19.160379  501605 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:02:19.160488  501605 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:02:19.213940  501605 cri.go:96] found id: "5cc3bd4bc4c1fca307ced2a934a7aef674e63f5f91fcd54697c1c0e8a7e5e676"
	I0110 10:02:19.213968  501605 cri.go:96] found id: "c0a4eb50e2c15f0c909a14942c5e6e51335dfc5f1b4c205776a384e82feb0830"
	I0110 10:02:19.213978  501605 cri.go:96] found id: "4129c584728a1d9d005e5900b1d29bb8d94b5826d72dd240b3b77773e40abcac"
	I0110 10:02:19.214000  501605 cri.go:96] found id: "b8d4be0f660bd2d5bf4c919b8f3ef7f06479e1cc6044562ee85d22b026733d09"
	I0110 10:02:19.214013  501605 cri.go:96] found id: ""
	I0110 10:02:19.214083  501605 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 10:02:19.230742  501605 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:02:19Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:02:19.230848  501605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:02:19.241659  501605 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 10:02:19.241681  501605 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 10:02:19.241763  501605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 10:02:19.251215  501605 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 10:02:19.251684  501605 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-729486" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:02:19.251826  501605 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-308033/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-729486" cluster setting kubeconfig missing "old-k8s-version-729486" context setting]
	I0110 10:02:19.252203  501605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:02:19.253851  501605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 10:02:19.265556  501605 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 10:02:19.265602  501605 kubeadm.go:602] duration metric: took 23.914802ms to restartPrimaryControlPlane
	I0110 10:02:19.265629  501605 kubeadm.go:403] duration metric: took 105.368044ms to StartCluster
	I0110 10:02:19.265653  501605 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:02:19.265751  501605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:02:19.266412  501605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:02:19.266662  501605 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:02:19.267048  501605 config.go:182] Loaded profile config "old-k8s-version-729486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 10:02:19.267118  501605 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:02:19.267264  501605 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-729486"
	I0110 10:02:19.267297  501605 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-729486"
	W0110 10:02:19.267309  501605 addons.go:248] addon storage-provisioner should already be in state true
	I0110 10:02:19.267334  501605 host.go:66] Checking if "old-k8s-version-729486" exists ...
	I0110 10:02:19.268199  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:19.268405  501605 addons.go:70] Setting dashboard=true in profile "old-k8s-version-729486"
	I0110 10:02:19.268425  501605 addons.go:239] Setting addon dashboard=true in "old-k8s-version-729486"
	W0110 10:02:19.268444  501605 addons.go:248] addon dashboard should already be in state true
	I0110 10:02:19.268481  501605 host.go:66] Checking if "old-k8s-version-729486" exists ...
	I0110 10:02:19.268833  501605 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-729486"
	I0110 10:02:19.268856  501605 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-729486"
	I0110 10:02:19.269128  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:19.269132  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:19.273134  501605 out.go:179] * Verifying Kubernetes components...
	I0110 10:02:19.276765  501605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:02:19.320318  501605 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:02:19.324548  501605 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:02:19.324570  501605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:02:19.324624  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:19.325256  501605 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-729486"
	W0110 10:02:19.325271  501605 addons.go:248] addon default-storageclass should already be in state true
	I0110 10:02:19.325294  501605 host.go:66] Checking if "old-k8s-version-729486" exists ...
	I0110 10:02:19.325802  501605 cli_runner.go:164] Run: docker container inspect old-k8s-version-729486 --format={{.State.Status}}
	I0110 10:02:19.335474  501605 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 10:02:19.340668  501605 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 10:02:19.343781  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 10:02:19.343807  501605 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 10:02:19.343878  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:19.386026  501605 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:02:19.386046  501605 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:02:19.386109  501605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-729486
	I0110 10:02:19.401926  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:19.436733  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:19.444823  501605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33424 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/old-k8s-version-729486/id_rsa Username:docker}
	I0110 10:02:19.645588  501605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:02:19.671085  501605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:02:19.758333  501605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:02:19.786840  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 10:02:19.786913  501605 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 10:02:19.852542  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 10:02:19.852616  501605 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 10:02:19.889470  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 10:02:19.889540  501605 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 10:02:19.942247  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 10:02:19.942315  501605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 10:02:20.002331  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 10:02:20.002411  501605 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 10:02:20.034514  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 10:02:20.034596  501605 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 10:02:20.069012  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 10:02:20.069091  501605 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 10:02:20.098046  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 10:02:20.098110  501605 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 10:02:20.117787  501605 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:02:20.117863  501605 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 10:02:20.146882  501605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:02:21.842360  490351 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 10:02:21.842394  490351 kubeadm.go:319] 
	I0110 10:02:21.842516  490351 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 10:02:21.848886  490351 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:02:21.849075  490351 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:02:21.849219  490351 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:02:21.852606  490351 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:02:21.852695  490351 kubeadm.go:319] OS: Linux
	I0110 10:02:21.852754  490351 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:02:21.852807  490351 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:02:21.852857  490351 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:02:21.852908  490351 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:02:21.852959  490351 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:02:21.853011  490351 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:02:21.853060  490351 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:02:21.853110  490351 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:02:21.853159  490351 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:02:21.853236  490351 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:02:21.853338  490351 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:02:21.853434  490351 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:02:21.853501  490351 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:02:21.856776  490351 out.go:252]   - Generating certificates and keys ...
	I0110 10:02:21.856870  490351 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:02:21.856939  490351 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:02:21.857011  490351 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:02:21.857072  490351 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:02:21.857137  490351 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:02:21.857190  490351 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:02:21.857247  490351 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:02:21.857386  490351 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:02:21.857442  490351 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:02:21.857577  490351 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:02:21.857647  490351 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:02:21.857714  490351 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:02:21.857762  490351 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:02:21.857821  490351 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:02:21.857876  490351 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:02:21.857941  490351 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:02:21.857999  490351 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:02:21.858066  490351 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:02:21.858125  490351 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:02:21.858212  490351 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:02:21.858282  490351 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:02:21.861182  490351 out.go:252]   - Booting up control plane ...
	I0110 10:02:21.861340  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:02:21.861473  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:02:21.861592  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:02:21.861750  490351 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:02:21.861895  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:02:21.862059  490351 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:02:21.862188  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:02:21.862264  490351 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:02:21.862452  490351 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:02:21.862605  490351 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:02:21.862748  490351 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001342949s
	I0110 10:02:21.862806  490351 kubeadm.go:319] 
	I0110 10:02:21.862878  490351 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 10:02:21.862914  490351 kubeadm.go:319] 	- The kubelet is not running
	I0110 10:02:21.863025  490351 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 10:02:21.863030  490351 kubeadm.go:319] 
	I0110 10:02:21.863142  490351 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 10:02:21.863176  490351 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 10:02:21.863208  490351 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 10:02:21.863213  490351 kubeadm.go:319] 
	W0110 10:02:21.863327  490351 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-524845 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001342949s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
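	[Note] The "[kubelet-check]" lines in the failure above reflect kubeadm polling the kubelet's local health endpoint (http://127.0.0.1:10248/healthz) for up to four minutes before giving up. A minimal, hypothetical Go sketch of that kind of poll is shown below; it is not kubeadm's or minikube's actual code, only an illustration of the check that timed out here.

	// health_poll.go - hypothetical sketch of a kubelet healthz poll (assumption:
	// the standard kubelet healthz port 10248 on localhost, as in the log above).
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func waitForKubelet(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // kubelet answered: healthy
				}
			}
			time.Sleep(time.Second) // retry until the deadline, like the 4m0s wait above
		}
		return fmt.Errorf("kubelet not healthy after %s", timeout)
	}

	func main() {
		if err := waitForKubelet(4 * time.Minute); err != nil {
			fmt.Println(err) // in the captured run this is where kubeadm bailed out
		}
	}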
	
	I0110 10:02:21.863399  490351 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0110 10:02:22.326865  490351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:02:22.351207  490351 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:02:22.351267  490351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:02:22.364009  490351 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:02:22.364028  490351 kubeadm.go:158] found existing configuration files:
	
	I0110 10:02:22.364080  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 10:02:22.377483  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:02:22.377599  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:02:22.387938  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 10:02:22.399386  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:02:22.399500  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:02:22.407042  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 10:02:22.421172  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:02:22.421285  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:02:22.432909  490351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 10:02:22.444516  490351 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:02:22.444637  490351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 10:02:22.457744  490351 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:02:22.529233  490351 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:02:22.529354  490351 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:02:22.663263  490351 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:02:22.663411  490351 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:02:22.663492  490351 kubeadm.go:319] OS: Linux
	I0110 10:02:22.663563  490351 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:02:22.663670  490351 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:02:22.663761  490351 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:02:22.663841  490351 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:02:22.663923  490351 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:02:22.663995  490351 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:02:22.664045  490351 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:02:22.664096  490351 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:02:22.664152  490351 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:02:22.789192  490351 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:02:22.789359  490351 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:02:22.789481  490351 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:02:22.807386  490351 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:02:22.812689  490351 out.go:252]   - Generating certificates and keys ...
	I0110 10:02:22.812850  490351 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:02:22.812970  490351 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:02:22.813528  490351 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 10:02:22.814201  490351 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 10:02:22.814820  490351 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 10:02:22.821187  490351 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 10:02:22.822578  490351 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 10:02:22.832829  490351 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 10:02:22.832919  490351 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 10:02:22.832992  490351 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 10:02:22.833030  490351 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 10:02:22.833085  490351 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:02:23.192856  490351 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:02:23.508049  490351 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:02:23.719185  490351 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:02:24.061850  490351 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:02:24.248896  490351 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:02:24.248995  490351 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:02:24.249547  490351 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:02:26.351439  501605 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.680266304s)
	I0110 10:02:26.351499  501605 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-729486" to be "Ready" ...
	I0110 10:02:26.351802  501605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.593405165s)
	I0110 10:02:26.352887  501605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.707228235s)
	I0110 10:02:26.385846  501605 node_ready.go:49] node "old-k8s-version-729486" is "Ready"
	I0110 10:02:26.385881  501605 node_ready.go:38] duration metric: took 34.368098ms for node "old-k8s-version-729486" to be "Ready" ...
	I0110 10:02:26.385926  501605 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:02:26.386023  501605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:02:26.967653  501605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.82067345s)
	I0110 10:02:26.967852  501605 api_server.go:72] duration metric: took 7.701145531s to wait for apiserver process to appear ...
	I0110 10:02:26.967903  501605 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:02:26.967938  501605 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:02:26.970869  501605 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-729486 addons enable metrics-server
	
	I0110 10:02:26.973776  501605 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I0110 10:02:24.252594  490351 out.go:252]   - Booting up control plane ...
	I0110 10:02:24.252697  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:02:24.252776  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:02:24.253741  490351 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:02:24.281294  490351 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:02:24.281402  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:02:24.289897  490351 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:02:24.289997  490351 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:02:24.290037  490351 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:02:24.511360  490351 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:02:24.511481  490351 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:02:26.976834  501605 addons.go:530] duration metric: took 7.709707947s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I0110 10:02:26.980881  501605 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:02:26.982673  501605 api_server.go:141] control plane version: v1.28.0
	I0110 10:02:26.982707  501605 api_server.go:131] duration metric: took 14.791189ms to wait for apiserver health ...
	I0110 10:02:26.982725  501605 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:02:26.988011  501605 system_pods.go:59] 8 kube-system pods found
	I0110 10:02:26.988059  501605 system_pods.go:61] "coredns-5dd5756b68-xsgtg" [c3718681-9e27-4160-b9fa-8462b5c71a26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:02:26.988069  501605 system_pods.go:61] "etcd-old-k8s-version-729486" [76c695b2-b8aa-4ff0-ba29-32d4d846f6d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:02:26.988076  501605 system_pods.go:61] "kindnet-mcvws" [9a148c52-2e43-474d-accb-ff93db5e4756] Running
	I0110 10:02:26.988083  501605 system_pods.go:61] "kube-apiserver-old-k8s-version-729486" [ca0696bd-6f69-4f84-88e3-c1e430041c0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:02:26.988091  501605 system_pods.go:61] "kube-controller-manager-old-k8s-version-729486" [87cb675c-5667-4343-95c4-37ea7b51b941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:02:26.988207  501605 system_pods.go:61] "kube-proxy-szwsd" [550b3042-ef9d-4e44-978b-f18534dc02bb] Running
	I0110 10:02:26.988227  501605 system_pods.go:61] "kube-scheduler-old-k8s-version-729486" [35c66509-77a2-4846-b919-14c61b09566f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:02:26.988232  501605 system_pods.go:61] "storage-provisioner" [016f019c-d231-41db-b408-7bc9e1fb613e] Running
	I0110 10:02:26.988245  501605 system_pods.go:74] duration metric: took 5.496539ms to wait for pod list to return data ...
	I0110 10:02:26.988258  501605 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:02:26.995074  501605 default_sa.go:45] found service account: "default"
	I0110 10:02:26.995102  501605 default_sa.go:55] duration metric: took 6.829175ms for default service account to be created ...
	I0110 10:02:26.995117  501605 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:02:27.004027  501605 system_pods.go:86] 8 kube-system pods found
	I0110 10:02:27.004147  501605 system_pods.go:89] "coredns-5dd5756b68-xsgtg" [c3718681-9e27-4160-b9fa-8462b5c71a26] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:02:27.004195  501605 system_pods.go:89] "etcd-old-k8s-version-729486" [76c695b2-b8aa-4ff0-ba29-32d4d846f6d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:02:27.004222  501605 system_pods.go:89] "kindnet-mcvws" [9a148c52-2e43-474d-accb-ff93db5e4756] Running
	I0110 10:02:27.004249  501605 system_pods.go:89] "kube-apiserver-old-k8s-version-729486" [ca0696bd-6f69-4f84-88e3-c1e430041c0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:02:27.004275  501605 system_pods.go:89] "kube-controller-manager-old-k8s-version-729486" [87cb675c-5667-4343-95c4-37ea7b51b941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:02:27.004310  501605 system_pods.go:89] "kube-proxy-szwsd" [550b3042-ef9d-4e44-978b-f18534dc02bb] Running
	I0110 10:02:27.004342  501605 system_pods.go:89] "kube-scheduler-old-k8s-version-729486" [35c66509-77a2-4846-b919-14c61b09566f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:02:27.004365  501605 system_pods.go:89] "storage-provisioner" [016f019c-d231-41db-b408-7bc9e1fb613e] Running
	I0110 10:02:27.004390  501605 system_pods.go:126] duration metric: took 9.266417ms to wait for k8s-apps to be running ...
	I0110 10:02:27.004423  501605 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:02:27.004564  501605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:02:27.026886  501605 system_svc.go:56] duration metric: took 22.449725ms WaitForService to wait for kubelet
	I0110 10:02:27.026941  501605 kubeadm.go:587] duration metric: took 7.760238337s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:02:27.026985  501605 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:02:27.032023  501605 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:02:27.032100  501605 node_conditions.go:123] node cpu capacity is 2
	I0110 10:02:27.032170  501605 node_conditions.go:105] duration metric: took 5.171619ms to run NodePressure ...
	I0110 10:02:27.032198  501605 start.go:242] waiting for startup goroutines ...
	I0110 10:02:27.032219  501605 start.go:247] waiting for cluster config update ...
	I0110 10:02:27.032257  501605 start.go:256] writing updated cluster config ...
	I0110 10:02:27.032619  501605 ssh_runner.go:195] Run: rm -f paused
	I0110 10:02:27.036704  501605 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:02:27.042248  501605 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xsgtg" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 10:02:29.048416  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:31.048784  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:33.048943  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:35.547867  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:37.548770  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:40.055674  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:42.548625  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:44.548816  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:47.049850  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:49.547825  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:51.548570  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:54.048054  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:56.048797  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:02:58.548783  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:03:01.047979  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	W0110 10:03:03.048462  501605 pod_ready.go:104] pod "coredns-5dd5756b68-xsgtg" is not "Ready", error: <nil>
	I0110 10:03:05.048305  501605 pod_ready.go:94] pod "coredns-5dd5756b68-xsgtg" is "Ready"
	I0110 10:03:05.048335  501605 pod_ready.go:86] duration metric: took 38.006059001s for pod "coredns-5dd5756b68-xsgtg" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.051700  501605 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.056664  501605 pod_ready.go:94] pod "etcd-old-k8s-version-729486" is "Ready"
	I0110 10:03:05.056693  501605 pod_ready.go:86] duration metric: took 4.964814ms for pod "etcd-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.059428  501605 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.064145  501605 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-729486" is "Ready"
	I0110 10:03:05.064188  501605 pod_ready.go:86] duration metric: took 4.734601ms for pod "kube-apiserver-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.067016  501605 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.246730  501605 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-729486" is "Ready"
	I0110 10:03:05.246761  501605 pod_ready.go:86] duration metric: took 179.721102ms for pod "kube-controller-manager-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.446457  501605 pod_ready.go:83] waiting for pod "kube-proxy-szwsd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:05.845672  501605 pod_ready.go:94] pod "kube-proxy-szwsd" is "Ready"
	I0110 10:03:05.845703  501605 pod_ready.go:86] duration metric: took 399.21824ms for pod "kube-proxy-szwsd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:06.046721  501605 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:06.446662  501605 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-729486" is "Ready"
	I0110 10:03:06.446707  501605 pod_ready.go:86] duration metric: took 399.95969ms for pod "kube-scheduler-old-k8s-version-729486" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:03:06.446721  501605 pod_ready.go:40] duration metric: took 39.409936342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:03:06.502367  501605 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I0110 10:03:06.505711  501605 out.go:203] 
	W0110 10:03:06.508539  501605 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 10:03:06.511688  501605 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:03:06.514628  501605 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-729486" cluster and "default" namespace by default
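	[Note] The "waiting for apiserver healthz status" step above (api_server.go) is essentially a repeated HTTPS GET against the control plane's /healthz endpoint until it returns 200, as the "returned 200: ok" line shows. A rough, hypothetical Go sketch of such a check follows; it is not minikube's implementation, and TLS verification is skipped here only to keep the sketch short (minikube itself authenticates with the cluster CA).

	// apiserver_healthz.go - hypothetical one-shot healthz probe against the
	// endpoint seen in the log (https://192.168.76.2:8443/healthz).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip verification for brevity; real tooling should trust the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz error:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}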
	
	
	==> CRI-O <==
	Jan 10 10:02:56 old-k8s-version-729486 crio[662]: time="2026-01-10T10:02:56.760652336Z" level=info msg="Started container" PID=1680 containerID=18fec54aabaa17d53d921341aeb10a80766bce2af0d5fb40f462662b29ee03f8 description=kube-system/storage-provisioner/storage-provisioner id=7be32f43-fe72-4495-9195-9ed3fabc64aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ddf837f90f1058dff48c85e749c3d14e6092170e419892224abcebc6549bf3c
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.535294409Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8a505654-bb65-448b-aba7-f8fdba8bb09e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.536549236Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=711509c5-c038-4f53-b8d4-988584943524 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.537645184Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445/dashboard-metrics-scraper" id=7aa81463-6729-4593-8d26-9f2065c0dce3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.537782318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.551420541Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.552112578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.566644285Z" level=info msg="Created container 2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445/dashboard-metrics-scraper" id=7aa81463-6729-4593-8d26-9f2065c0dce3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.567438338Z" level=info msg="Starting container: 2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337" id=19ed699c-9fbc-41ca-9135-664c00f37a53 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.569451254Z" level=info msg="Started container" PID=1692 containerID=2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445/dashboard-metrics-scraper id=19ed699c-9fbc-41ca-9135-664c00f37a53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b0528f76aab8da2916f71090a8f4752db1594f30164e9d3cefab5a9052158101
	Jan 10 10:03:01 old-k8s-version-729486 conmon[1690]: conmon 2e580756364032f5d0f9 <ninfo>: container 1692 exited with status 1
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.744188769Z" level=info msg="Removing container: f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0" id=5ecc2b79-91e5-4aef-b3d6-7ecfbf74655f name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.756298652Z" level=info msg="Error loading conmon cgroup of container f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0: cgroup deleted" id=5ecc2b79-91e5-4aef-b3d6-7ecfbf74655f name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:03:01 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:01.762167019Z" level=info msg="Removed container f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445/dashboard-metrics-scraper" id=5ecc2b79-91e5-4aef-b3d6-7ecfbf74655f name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.436877612Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.436914445Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.441402889Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.441438401Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.454532468Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.454789653Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.460856283Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.46089429Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.460927267Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.467275623Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:03:06 old-k8s-version-729486 crio[662]: time="2026-01-10T10:03:06.467309904Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	2e58075636403       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   b0528f76aab8d       dashboard-metrics-scraper-5f989dc9cf-jt445       kubernetes-dashboard
	18fec54aabaa1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   8ddf837f90f10       storage-provisioner                              kube-system
	44b2462370e72       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago       Running             kubernetes-dashboard        0                   58ddd93997c99       kubernetes-dashboard-8694d4445c-c5xh5            kubernetes-dashboard
	1fa19c6840d4b       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   49a998d6d8477       busybox                                          default
	b5741a9b10c5f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           57 seconds ago       Running             coredns                     1                   5da831b10b8ca       coredns-5dd5756b68-xsgtg                         kube-system
	cfe9bbe8014e3       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           57 seconds ago       Running             kindnet-cni                 1                   6e000f0412a81       kindnet-mcvws                                    kube-system
	e840a9a6d843f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           57 seconds ago       Running             kube-proxy                  1                   9f2b68caac326       kube-proxy-szwsd                                 kube-system
	113f2c97bb2d9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   8ddf837f90f10       storage-provisioner                              kube-system
	5cc3bd4bc4c1f       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   cdd626251d4b2       kube-controller-manager-old-k8s-version-729486   kube-system
	c0a4eb50e2c15       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   83350cee52e51       etcd-old-k8s-version-729486                      kube-system
	4129c584728a1       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   31b8f9f305bfd       kube-apiserver-old-k8s-version-729486            kube-system
	b8d4be0f660bd       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   14a80284559eb       kube-scheduler-old-k8s-version-729486            kube-system
	
	
	==> coredns [b5741a9b10c5f413ff081cf53038322a13bc68558e9bcb48ec9f693161763914] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53525 - 20486 "HINFO IN 4725109551558841005.285351232193025376. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.035321013s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-729486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-729486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=old-k8s-version-729486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_01_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:01:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-729486
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:03:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:02:55 +0000   Sat, 10 Jan 2026 10:01:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:02:55 +0000   Sat, 10 Jan 2026 10:01:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:02:55 +0000   Sat, 10 Jan 2026 10:01:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:02:55 +0000   Sat, 10 Jan 2026 10:01:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-729486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                6835df29-9649-49d8-a5dc-2264bb66093f
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-xsgtg                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-old-k8s-version-729486                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m4s
	  kube-system                 kindnet-mcvws                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-729486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-729486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-szwsd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-729486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-jt445        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-c5xh5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 57s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-729486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-729486 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-729486 event: Registered Node old-k8s-version-729486 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-729486 status is now: NodeReady
	  Normal  Starting                 65s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node old-k8s-version-729486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node old-k8s-version-729486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node old-k8s-version-729486 event: Registered Node old-k8s-version-729486 in Controller
	
	
	==> dmesg <==
	[Jan10 09:29] overlayfs: idmapped layers are currently not supported
	[Jan10 09:30] overlayfs: idmapped layers are currently not supported
	[Jan10 09:31] overlayfs: idmapped layers are currently not supported
	[Jan10 09:35] overlayfs: idmapped layers are currently not supported
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c0a4eb50e2c15f0c909a14942c5e6e51335dfc5f1b4c205776a384e82feb0830] <==
	{"level":"info","ts":"2026-01-10T10:02:19.637407Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:02:19.637512Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:02:19.637912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T10:02:19.640925Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2026-01-10T10:02:19.641118Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T10:02:19.641658Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T10:02:19.681942Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T10:02:19.68228Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T10:02:19.682347Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T10:02:19.68243Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:02:19.684532Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:02:21.024576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T10:02:21.024684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:02:21.02474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:02:21.024788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T10:02:21.024819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:02:21.024852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T10:02:21.024885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:02:21.031065Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-729486 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:02:21.031167Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:02:21.032297Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T10:02:21.031225Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:02:21.037565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T10:02:21.04854Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:02:21.052532Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:03:23 up  2:45,  0 user,  load average: 1.28, 1.46, 1.92
	Linux old-k8s-version-729486 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cfe9bbe8014e3d63ffcc2a2b208a8181dc00308bdee332d52426fe84c746f58c] <==
	I0110 10:02:26.251986       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:02:26.317032       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:02:26.317172       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:02:26.317192       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:02:26.317204       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:02:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:02:26.428814       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:02:26.428832       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:02:26.428850       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:02:26.429654       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 10:02:56.429713       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 10:02:56.429717       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0110 10:02:56.429813       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 10:02:56.429863       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0110 10:02:58.029237       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:02:58.029268       1 metrics.go:72] Registering metrics
	I0110 10:02:58.029348       1 controller.go:711] "Syncing nftables rules"
	I0110 10:03:06.429136       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:03:06.429205       1 main.go:301] handling current node
	I0110 10:03:16.433534       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:03:16.433570       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4129c584728a1d9d005e5900b1d29bb8d94b5826d72dd240b3b77773e40abcac] <==
	I0110 10:02:24.881191       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 10:02:24.883692       1 shared_informer.go:318] Caches are synced for configmaps
	I0110 10:02:24.883810       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0110 10:02:24.883825       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 10:02:24.883942       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0110 10:02:24.891221       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 10:02:24.900629       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 10:02:24.904677       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:02:24.912781       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0110 10:02:24.930782       1 aggregator.go:166] initial CRD sync complete...
	I0110 10:02:24.930819       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 10:02:24.930827       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 10:02:24.930837       1 cache.go:39] Caches are synced for autoregister controller
	E0110 10:02:25.001317       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 10:02:25.511803       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 10:02:26.727002       1 controller.go:624] quota admission added evaluator for: namespaces
	I0110 10:02:26.776340       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 10:02:26.810481       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:02:26.831416       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:02:26.844909       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 10:02:26.912625       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.64.223"}
	I0110 10:02:26.953727       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.130.167"}
	I0110 10:02:37.335101       1 controller.go:624] quota admission added evaluator for: endpoints
	I0110 10:02:37.734463       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0110 10:02:37.870661       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5cc3bd4bc4c1fca307ced2a934a7aef674e63f5f91fcd54697c1c0e8a7e5e676] <==
	I0110 10:02:37.744265       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I0110 10:02:37.829853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="497.615969ms"
	I0110 10:02:37.831047       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-jt445"
	I0110 10:02:37.831417       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-c5xh5"
	I0110 10:02:37.831363       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.806µs"
	I0110 10:02:37.860295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="115.847053ms"
	I0110 10:02:37.864991       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="124.159004ms"
	I0110 10:02:37.888362       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 10:02:37.892907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="32.553288ms"
	I0110 10:02:37.893059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="28.025008ms"
	I0110 10:02:37.895385       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 10:02:37.895475       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 10:02:37.895767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.239µs"
	I0110 10:02:37.895869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.73µs"
	I0110 10:02:37.896538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.148µs"
	I0110 10:02:37.910901       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.428µs"
	I0110 10:02:42.702766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.979µs"
	I0110 10:02:43.710280       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.774µs"
	I0110 10:02:44.715867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.169µs"
	I0110 10:02:47.739933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.186284ms"
	I0110 10:02:47.740480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="76.613µs"
	I0110 10:03:01.759117       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.075µs"
	I0110 10:03:04.733250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.967432ms"
	I0110 10:03:04.734353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.41µs"
	I0110 10:03:08.166967       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.153µs"
	
	
	==> kube-proxy [e840a9a6d843f4f94134f005d142bc77765ec34f5d780777c800b3831d78be18] <==
	I0110 10:02:26.088675       1 server_others.go:69] "Using iptables proxy"
	I0110 10:02:26.114910       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0110 10:02:26.277698       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:02:26.281481       1 server_others.go:152] "Using iptables Proxier"
	I0110 10:02:26.281519       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 10:02:26.281527       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 10:02:26.281552       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 10:02:26.281900       1 server.go:846] "Version info" version="v1.28.0"
	I0110 10:02:26.281911       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:02:26.282565       1 config.go:188] "Starting service config controller"
	I0110 10:02:26.282588       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 10:02:26.282604       1 config.go:97] "Starting endpoint slice config controller"
	I0110 10:02:26.282607       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 10:02:26.283050       1 config.go:315] "Starting node config controller"
	I0110 10:02:26.283056       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 10:02:26.382892       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 10:02:26.382967       1 shared_informer.go:318] Caches are synced for service config
	I0110 10:02:26.383106       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b8d4be0f660bd2d5bf4c919b8f3ef7f06479e1cc6044562ee85d22b026733d09] <==
	I0110 10:02:22.646836       1 serving.go:348] Generated self-signed cert in-memory
	I0110 10:02:25.017036       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0110 10:02:25.017276       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:02:25.025853       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0110 10:02:25.028615       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0110 10:02:25.028711       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0110 10:02:25.028768       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 10:02:25.028800       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0110 10:02:25.028837       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0110 10:02:25.028865       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0110 10:02:25.028713       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0110 10:02:25.129601       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0110 10:02:25.129678       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0110 10:02:25.129769       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 10 10:02:37 old-k8s-version-729486 kubelet[789]: I0110 10:02:37.881715     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fggl\" (UniqueName: \"kubernetes.io/projected/131040d6-af8c-40cf-8970-f218be5ab7fc-kube-api-access-5fggl\") pod \"kubernetes-dashboard-8694d4445c-c5xh5\" (UID: \"131040d6-af8c-40cf-8970-f218be5ab7fc\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c5xh5"
	Jan 10 10:02:37 old-k8s-version-729486 kubelet[789]: I0110 10:02:37.881850     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tq6\" (UniqueName: \"kubernetes.io/projected/35fa1614-40e8-49f6-b3e3-7176013da408-kube-api-access-l2tq6\") pod \"dashboard-metrics-scraper-5f989dc9cf-jt445\" (UID: \"35fa1614-40e8-49f6-b3e3-7176013da408\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445"
	Jan 10 10:02:37 old-k8s-version-729486 kubelet[789]: I0110 10:02:37.881959     789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/35fa1614-40e8-49f6-b3e3-7176013da408-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-jt445\" (UID: \"35fa1614-40e8-49f6-b3e3-7176013da408\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445"
	Jan 10 10:02:38 old-k8s-version-729486 kubelet[789]: W0110 10:02:38.209098     789 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/crio-b0528f76aab8da2916f71090a8f4752db1594f30164e9d3cefab5a9052158101 WatchSource:0}: Error finding container b0528f76aab8da2916f71090a8f4752db1594f30164e9d3cefab5a9052158101: Status 404 returned error can't find the container with id b0528f76aab8da2916f71090a8f4752db1594f30164e9d3cefab5a9052158101
	Jan 10 10:02:38 old-k8s-version-729486 kubelet[789]: W0110 10:02:38.233018     789 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e3db4a48fc4a3d8a697f8466bfd3d5e0cc433a3939cdcd3ce69d7f380f4ef28a/crio-58ddd93997c99a933aee967f4b72e8e74396e3a22114bb6de24cae6b34e9f2eb WatchSource:0}: Error finding container 58ddd93997c99a933aee967f4b72e8e74396e3a22114bb6de24cae6b34e9f2eb: Status 404 returned error can't find the container with id 58ddd93997c99a933aee967f4b72e8e74396e3a22114bb6de24cae6b34e9f2eb
	Jan 10 10:02:42 old-k8s-version-729486 kubelet[789]: I0110 10:02:42.686609     789 scope.go:117] "RemoveContainer" containerID="45dad10ccbe6bce30734e6480e83839f6b256d4d25b8d9e4e92228440bb45f5a"
	Jan 10 10:02:43 old-k8s-version-729486 kubelet[789]: I0110 10:02:43.690376     789 scope.go:117] "RemoveContainer" containerID="45dad10ccbe6bce30734e6480e83839f6b256d4d25b8d9e4e92228440bb45f5a"
	Jan 10 10:02:43 old-k8s-version-729486 kubelet[789]: I0110 10:02:43.690680     789 scope.go:117] "RemoveContainer" containerID="f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0"
	Jan 10 10:02:43 old-k8s-version-729486 kubelet[789]: E0110 10:02:43.690973     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jt445_kubernetes-dashboard(35fa1614-40e8-49f6-b3e3-7176013da408)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445" podUID="35fa1614-40e8-49f6-b3e3-7176013da408"
	Jan 10 10:02:44 old-k8s-version-729486 kubelet[789]: I0110 10:02:44.694378     789 scope.go:117] "RemoveContainer" containerID="f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0"
	Jan 10 10:02:44 old-k8s-version-729486 kubelet[789]: E0110 10:02:44.694646     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jt445_kubernetes-dashboard(35fa1614-40e8-49f6-b3e3-7176013da408)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445" podUID="35fa1614-40e8-49f6-b3e3-7176013da408"
	Jan 10 10:02:48 old-k8s-version-729486 kubelet[789]: I0110 10:02:48.147853     789 scope.go:117] "RemoveContainer" containerID="f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0"
	Jan 10 10:02:48 old-k8s-version-729486 kubelet[789]: E0110 10:02:48.148209     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jt445_kubernetes-dashboard(35fa1614-40e8-49f6-b3e3-7176013da408)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445" podUID="35fa1614-40e8-49f6-b3e3-7176013da408"
	Jan 10 10:02:56 old-k8s-version-729486 kubelet[789]: I0110 10:02:56.726918     789 scope.go:117] "RemoveContainer" containerID="113f2c97bb2d9820a9ff596f3fde5fccae866c32a36827c6e86be9c58fdc01f2"
	Jan 10 10:02:56 old-k8s-version-729486 kubelet[789]: I0110 10:02:56.750297     789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c5xh5" podStartSLOduration=10.500591096 podCreationTimestamp="2026-01-10 10:02:37 +0000 UTC" firstStartedPulling="2026-01-10 10:02:38.237217091 +0000 UTC m=+19.880407246" lastFinishedPulling="2026-01-10 10:02:47.486862385 +0000 UTC m=+29.130052540" observedRunningTime="2026-01-10 10:02:47.723315165 +0000 UTC m=+29.366505320" watchObservedRunningTime="2026-01-10 10:02:56.75023639 +0000 UTC m=+38.393426537"
	Jan 10 10:03:01 old-k8s-version-729486 kubelet[789]: I0110 10:03:01.534638     789 scope.go:117] "RemoveContainer" containerID="f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0"
	Jan 10 10:03:01 old-k8s-version-729486 kubelet[789]: I0110 10:03:01.742037     789 scope.go:117] "RemoveContainer" containerID="f69dc2213c1998533c2cde4d8fbf907162e4fc37f4968c9befe8c1746713cdd0"
	Jan 10 10:03:01 old-k8s-version-729486 kubelet[789]: I0110 10:03:01.742254     789 scope.go:117] "RemoveContainer" containerID="2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337"
	Jan 10 10:03:01 old-k8s-version-729486 kubelet[789]: E0110 10:03:01.742575     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jt445_kubernetes-dashboard(35fa1614-40e8-49f6-b3e3-7176013da408)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445" podUID="35fa1614-40e8-49f6-b3e3-7176013da408"
	Jan 10 10:03:08 old-k8s-version-729486 kubelet[789]: I0110 10:03:08.148085     789 scope.go:117] "RemoveContainer" containerID="2e580756364032f5d0f9bca53c7d04f25d6035560e11d3ddf905ced6fceeb337"
	Jan 10 10:03:08 old-k8s-version-729486 kubelet[789]: E0110 10:03:08.148395     789 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-jt445_kubernetes-dashboard(35fa1614-40e8-49f6-b3e3-7176013da408)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jt445" podUID="35fa1614-40e8-49f6-b3e3-7176013da408"
	Jan 10 10:03:18 old-k8s-version-729486 kubelet[789]: I0110 10:03:18.832181     789 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 10:03:18 old-k8s-version-729486 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 10:03:18 old-k8s-version-729486 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 10:03:18 old-k8s-version-729486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [44b2462370e7204654417d02b3c6a94563343ab46fe0617bebb08e76506c8f1b] <==
	2026/01/10 10:02:47 Using namespace: kubernetes-dashboard
	2026/01/10 10:02:47 Using in-cluster config to connect to apiserver
	2026/01/10 10:02:47 Using secret token for csrf signing
	2026/01/10 10:02:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 10:02:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 10:02:47 Successful initial request to the apiserver, version: v1.28.0
	2026/01/10 10:02:47 Generating JWE encryption key
	2026/01/10 10:02:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 10:02:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 10:02:47 Initializing JWE encryption key from synchronized object
	2026/01/10 10:02:47 Creating in-cluster Sidecar client
	2026/01/10 10:02:48 Serving insecurely on HTTP port: 9090
	2026/01/10 10:02:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:03:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:02:47 Starting overwatch
	
	
	==> storage-provisioner [113f2c97bb2d9820a9ff596f3fde5fccae866c32a36827c6e86be9c58fdc01f2] <==
	I0110 10:02:26.136870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 10:02:56.140293       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [18fec54aabaa17d53d921341aeb10a80766bce2af0d5fb40f462662b29ee03f8] <==
	I0110 10:02:56.771657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:02:56.786168       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:02:56.786286       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 10:03:14.188781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:03:14.189022       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-729486_642fc9f3-7553-4a47-996a-d0963eb16563!
	I0110 10:03:14.190456       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1b364ec7-3081-49c8-b8f1-66ca586b914b", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-729486_642fc9f3-7553-4a47-996a-d0963eb16563 became leader
	I0110 10:03:14.291540       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-729486_642fc9f3-7553-4a47-996a-d0963eb16563!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-729486 -n old-k8s-version-729486
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-729486 -n old-k8s-version-729486: exit status 2 (362.17168ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-729486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.478079ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:04:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-964204 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-964204 describe deploy/metrics-server -n kube-system: exit status 1 (82.280815ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-964204 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-964204
helpers_test.go:244: (dbg) docker inspect no-preload-964204:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98",
	        "Created": "2026-01-10T10:03:28.469288354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 506161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:03:28.544830476Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/hosts",
	        "LogPath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98-json.log",
	        "Name": "/no-preload-964204",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-964204:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-964204",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98",
	                "LowerDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-964204",
	                "Source": "/var/lib/docker/volumes/no-preload-964204/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-964204",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-964204",
	                "name.minikube.sigs.k8s.io": "no-preload-964204",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c07b0ab10ee1cf85bdf91bb704f1fcd0604f2603603741bf2e2b8530de7d769a",
	            "SandboxKey": "/var/run/docker/netns/c07b0ab10ee1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-964204": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:08:1d:eb:e1:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "23c88132d52b29689462f98c2dbfa4655b3eded5f2a83bfc6642616f52ac86e6",
	                    "EndpointID": "f5b83551adc21bf2dc436a8cd538f15a22feb10f4599b7315198838d52d54aba",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-964204",
	                        "d5228a313f58"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-964204 -n no-preload-964204
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-964204 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-964204 logs -n 25: (1.19092778s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-255897 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ ssh     │ -p cilium-255897 sudo crio config                                                                                                                                                                                                             │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │                     │
	│ delete  │ -p cilium-255897                                                                                                                                                                                                                              │ cilium-255897             │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:57 UTC │ 10 Jan 26 09:58 UTC │
	│ delete  │ -p cert-expiration-599529                                                                                                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │ 10 Jan 26 09:58 UTC │
	│ start   │ -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-524845 │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │                     │
	│ delete  │ -p force-systemd-env-646877                                                                                                                                                                                                                   │ force-systemd-env-646877  │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p cert-options-525619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ cert-options-525619 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ -p cert-options-525619 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ delete  │ -p cert-options-525619                                                                                                                                                                                                                        │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │                     │
	│ stop    │ -p old-k8s-version-729486 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │ 10 Jan 26 10:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-729486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:02 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:03 UTC │
	│ image   │ old-k8s-version-729486 image list --format=json                                                                                                                                                                                               │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ pause   │ -p old-k8s-version-729486 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │                     │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
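For reference, the Audit table above is part of the post-mortem log collection; rerunning the same command against this profile reproduces it:

    out/minikube-linux-arm64 -p no-preload-964204 logs -n 25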
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:03:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:03:27.496490  505844 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:03:27.496694  505844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:03:27.496706  505844 out.go:374] Setting ErrFile to fd 2...
	I0110 10:03:27.496712  505844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:03:27.496950  505844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:03:27.497384  505844 out.go:368] Setting JSON to false
	I0110 10:03:27.498202  505844 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9957,"bootTime":1768029451,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:03:27.498270  505844 start.go:143] virtualization:  
	I0110 10:03:27.499961  505844 out.go:179] * [no-preload-964204] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:03:27.501375  505844 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:03:27.501478  505844 notify.go:221] Checking for updates...
	I0110 10:03:27.503736  505844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:03:27.504889  505844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:03:27.506225  505844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:03:27.507280  505844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:03:27.508451  505844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:03:27.510178  505844 config.go:182] Loaded profile config "force-systemd-flag-524845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:03:27.510314  505844 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:03:27.531857  505844 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:03:27.531998  505844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:03:27.600579  505844 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:03:27.590740752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:03:27.600691  505844 docker.go:319] overlay module found
	I0110 10:03:27.602260  505844 out.go:179] * Using the docker driver based on user configuration
	I0110 10:03:27.603383  505844 start.go:309] selected driver: docker
	I0110 10:03:27.603397  505844 start.go:928] validating driver "docker" against <nil>
	I0110 10:03:27.603410  505844 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:03:27.604124  505844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:03:27.659795  505844 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:03:27.645638293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:03:27.659950  505844 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 10:03:27.660177  505844 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:03:27.662937  505844 out.go:179] * Using Docker driver with root privileges
	I0110 10:03:27.664338  505844 cni.go:84] Creating CNI manager for ""
	I0110 10:03:27.664408  505844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:03:27.664422  505844 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 10:03:27.664567  505844 start.go:353] cluster config:
	{Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:03:27.665983  505844 out.go:179] * Starting "no-preload-964204" primary control-plane node in "no-preload-964204" cluster
	I0110 10:03:27.667088  505844 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:03:27.668310  505844 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:03:27.669535  505844 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:03:27.669553  505844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:03:27.669648  505844 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/config.json ...
	I0110 10:03:27.669676  505844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/config.json: {Name:mkbac770da3efacc626bda50614cc574f377e7a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:03:27.669835  505844 cache.go:107] acquiring lock: {Name:mkaf98767e2a7d58e08cc2ca469eac45d26ab17d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:03:27.669910  505844 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0110 10:03:27.669925  505844 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 99.242µs
	I0110 10:03:27.669942  505844 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0110 10:03:27.669953  505844 cache.go:107] acquiring lock: {Name:mk20f45a028e063162f8cd4bcc9049083b517dce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:03:27.669988  505844 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0110 10:03:27.669998  505844 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 46.015µs
	I0110 10:03:27.670005  505844 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0110 10:03:27.670020  505844 cache.go:107] acquiring lock: {Name:mk49f61dae811454fbbf5c86caa9b028b9c6fc70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:03:27.670052  505844 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0110 10:03:27.670062  505844 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 43.102µs
	I0110 10:03:27.670068  505844 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0110 10:03:27.670083  505844 cache.go:107] acquiring lock: {Name:mk1d8ad3a0da43b5820d3ac9775158ff65f73409 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:03:27.670113  505844 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0110 10:03:27.670121  505844 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 39.82µs
	I0110 10:03:27.670132  505844 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0110 10:03:27.670145  505844 cache.go:107] acquiring lock: {Name:mk025301e6f5fb7d9efce7266c9392491c803686 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:03:27.670174  505844 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0110 10:03:27.670188  505844 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 43.866µs
	I0110 10:03:27.670196  505844 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0110 10:03:27.670205  505844 cache.go:107] acquiring lock: {Name:mke106dc55e7252772391fff3ed3fce4c597722f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:03:27.670236  505844 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0110 10:03:27.670244  505844 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.477µs
	I0110 10:03:27.670250  505844 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0110 10:03:27.670265  505844 cache.go:107] acquiring lock: {Name:mk27d75a0d283ab8c320b03d40025ce2f8416bac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:03:27.670296  505844 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I0110 10:03:27.670305  505844 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 41.453µs
	I0110 10:03:27.670329  505844 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0110 10:03:27.670337  505844 cache.go:107] acquiring lock: {Name:mk5e0c44af9753c2eb4284091ed19ea2384d8759 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:03:27.670368  505844 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0110 10:03:27.670377  505844 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 40.501µs
	I0110 10:03:27.670383  505844 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0110 10:03:27.670391  505844 cache.go:87] Successfully saved all images to host disk.
	I0110 10:03:27.695071  505844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:03:27.695094  505844 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:03:27.695109  505844 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:03:27.695140  505844 start.go:360] acquireMachinesLock for no-preload-964204: {Name:mk30268180d89419a4155580e5db2de74dfb3aca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:03:27.695245  505844 start.go:364] duration metric: took 85.933µs to acquireMachinesLock for "no-preload-964204"
	I0110 10:03:27.695275  505844 start.go:93] Provisioning new machine with config: &{Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:03:27.695344  505844 start.go:125] createHost starting for "" (driver="docker")
	I0110 10:03:27.696989  505844 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 10:03:27.697209  505844 start.go:159] libmachine.API.Create for "no-preload-964204" (driver="docker")
	I0110 10:03:27.697240  505844 client.go:173] LocalClient.Create starting
	I0110 10:03:27.697305  505844 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 10:03:27.697339  505844 main.go:144] libmachine: Decoding PEM data...
	I0110 10:03:27.697358  505844 main.go:144] libmachine: Parsing certificate...
	I0110 10:03:27.697409  505844 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 10:03:27.697432  505844 main.go:144] libmachine: Decoding PEM data...
	I0110 10:03:27.697446  505844 main.go:144] libmachine: Parsing certificate...
	I0110 10:03:27.697788  505844 cli_runner.go:164] Run: docker network inspect no-preload-964204 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 10:03:27.718220  505844 cli_runner.go:211] docker network inspect no-preload-964204 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 10:03:27.718297  505844 network_create.go:284] running [docker network inspect no-preload-964204] to gather additional debugging logs...
	I0110 10:03:27.718317  505844 cli_runner.go:164] Run: docker network inspect no-preload-964204
	W0110 10:03:27.733884  505844 cli_runner.go:211] docker network inspect no-preload-964204 returned with exit code 1
	I0110 10:03:27.733914  505844 network_create.go:287] error running [docker network inspect no-preload-964204]: docker network inspect no-preload-964204: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-964204 not found
	I0110 10:03:27.733926  505844 network_create.go:289] output of [docker network inspect no-preload-964204]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-964204 not found
	
	** /stderr **
	I0110 10:03:27.734015  505844 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:03:27.750385  505844 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 10:03:27.750759  505844 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 10:03:27.750982  505844 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 10:03:27.751438  505844 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3a910}
	I0110 10:03:27.751456  505844 network_create.go:124] attempt to create docker network no-preload-964204 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 10:03:27.751512  505844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-964204 no-preload-964204
	I0110 10:03:27.810824  505844 network_create.go:108] docker network no-preload-964204 192.168.76.0/24 created
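As a reference point, the network-creation step logged above can be reproduced by hand. This is a minimal sketch using the same flags minikube passed in this run; the profile name no-preload-964204 and the 192.168.76.0/24 subnet are specific to this run and would differ elsewhere:

    # create the bridge network exactly as logged above
    docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-964204 \
      no-preload-964204
    # confirm the subnet assignment
    docker network inspect no-preload-964204 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'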
	I0110 10:03:27.810861  505844 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-964204" container
	I0110 10:03:27.810934  505844 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 10:03:27.827216  505844 cli_runner.go:164] Run: docker volume create no-preload-964204 --label name.minikube.sigs.k8s.io=no-preload-964204 --label created_by.minikube.sigs.k8s.io=true
	I0110 10:03:27.844784  505844 oci.go:103] Successfully created a docker volume no-preload-964204
	I0110 10:03:27.844872  505844 cli_runner.go:164] Run: docker run --rm --name no-preload-964204-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-964204 --entrypoint /usr/bin/test -v no-preload-964204:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 10:03:28.377507  505844 oci.go:107] Successfully prepared a docker volume no-preload-964204
	I0110 10:03:28.377581  505844 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W0110 10:03:28.377717  505844 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 10:03:28.377838  505844 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 10:03:28.450862  505844 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-964204 --name no-preload-964204 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-964204 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-964204 --network no-preload-964204 --ip 192.168.76.2 --volume no-preload-964204:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 10:03:28.770953  505844 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Running}}
	I0110 10:03:28.791549  505844 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:03:28.815510  505844 cli_runner.go:164] Run: docker exec no-preload-964204 stat /var/lib/dpkg/alternatives/iptables
	I0110 10:03:28.870286  505844 oci.go:144] the created container "no-preload-964204" has a running status.
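A quick way to confirm the node container reached this state, mirroring the checks in the log lines above (container name taken from this run):

    docker container inspect no-preload-964204 --format '{{.State.Status}}'   # expect "running"
    docker exec no-preload-964204 stat /var/lib/dpkg/alternatives/iptables    # same sanity check the log shows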
	I0110 10:03:28.870314  505844 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa...
	I0110 10:03:29.223369  505844 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 10:03:29.244349  505844 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:03:29.267954  505844 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 10:03:29.267973  505844 kic_runner.go:114] Args: [docker exec --privileged no-preload-964204 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 10:03:29.344642  505844 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:03:29.362078  505844 machine.go:94] provisionDockerMachine start ...
	I0110 10:03:29.362170  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:03:29.380457  505844 main.go:144] libmachine: Using SSH client type: native
	I0110 10:03:29.380823  505844 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I0110 10:03:29.380845  505844 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:03:29.381582  505844 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 10:03:32.528200  505844 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-964204
	
	I0110 10:03:32.528227  505844 ubuntu.go:182] provisioning hostname "no-preload-964204"
	I0110 10:03:32.528295  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:03:32.546068  505844 main.go:144] libmachine: Using SSH client type: native
	I0110 10:03:32.546376  505844 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I0110 10:03:32.546388  505844 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-964204 && echo "no-preload-964204" | sudo tee /etc/hostname
	I0110 10:03:32.705757  505844 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-964204
	
	I0110 10:03:32.705834  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:03:32.724298  505844 main.go:144] libmachine: Using SSH client type: native
	I0110 10:03:32.724636  505844 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I0110 10:03:32.724660  505844 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-964204' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-964204/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-964204' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:03:32.872631  505844 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:03:32.872668  505844 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:03:32.872687  505844 ubuntu.go:190] setting up certificates
	I0110 10:03:32.872700  505844 provision.go:84] configureAuth start
	I0110 10:03:32.872759  505844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-964204
	I0110 10:03:32.889816  505844 provision.go:143] copyHostCerts
	I0110 10:03:32.889884  505844 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:03:32.889898  505844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:03:32.889978  505844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:03:32.890085  505844 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:03:32.890097  505844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:03:32.890126  505844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:03:32.890194  505844 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:03:32.890204  505844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:03:32.890233  505844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:03:32.890293  505844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.no-preload-964204 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-964204]
	I0110 10:03:33.278960  505844 provision.go:177] copyRemoteCerts
	I0110 10:03:33.279035  505844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:03:33.279075  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:03:33.298422  505844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:03:33.404162  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:03:33.421455  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 10:03:33.439014  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:03:33.455822  505844 provision.go:87] duration metric: took 583.099551ms to configureAuth
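If the server certificate generated during configureAuth needs to be checked by hand, a standard openssl inspection works; the path below is the one reported in this run:

    openssl x509 -text -noout \
      -in /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem
    # the SAN list should match the san=[...] set logged above:
    # 127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-964204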
	I0110 10:03:33.455850  505844 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:03:33.456032  505844 config.go:182] Loaded profile config "no-preload-964204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:03:33.456150  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:03:33.473545  505844 main.go:144] libmachine: Using SSH client type: native
	I0110 10:03:33.473853  505844 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33429 <nil> <nil>}
	I0110 10:03:33.473873  505844 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:03:33.779578  505844 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:03:33.779604  505844 machine.go:97] duration metric: took 4.41750206s to provisionDockerMachine
	I0110 10:03:33.779615  505844 client.go:176] duration metric: took 6.082363826s to LocalClient.Create
	I0110 10:03:33.779628  505844 start.go:167] duration metric: took 6.082423995s to libmachine.API.Create "no-preload-964204"
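The container-runtime option written during this provisioning phase is a single drop-in file; a condensed, functionally equivalent sketch of the SSH command shown above:

    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio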
	I0110 10:03:33.779636  505844 start.go:293] postStartSetup for "no-preload-964204" (driver="docker")
	I0110 10:03:33.779647  505844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:03:33.779713  505844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:03:33.779756  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:03:33.796768  505844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:03:33.900384  505844 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:03:33.903731  505844 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:03:33.903766  505844 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:03:33.903778  505844 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:03:33.903835  505844 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:03:33.903928  505844 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:03:33.904034  505844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:03:33.911336  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:03:33.931080  505844 start.go:296] duration metric: took 151.429069ms for postStartSetup
	I0110 10:03:33.931462  505844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-964204
	I0110 10:03:33.948053  505844 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/config.json ...
	I0110 10:03:33.948375  505844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:03:33.948429  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:03:33.966035  505844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:03:34.069875  505844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:03:34.074606  505844 start.go:128] duration metric: took 6.379247684s to createHost
	I0110 10:03:34.074633  505844 start.go:83] releasing machines lock for "no-preload-964204", held for 6.379373257s
	I0110 10:03:34.074709  505844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-964204
	I0110 10:03:34.091214  505844 ssh_runner.go:195] Run: cat /version.json
	I0110 10:03:34.091228  505844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:03:34.091266  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:03:34.091288  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:03:34.110672  505844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:03:34.114736  505844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:03:34.322909  505844 ssh_runner.go:195] Run: systemctl --version
	I0110 10:03:34.329279  505844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:03:34.362983  505844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:03:34.367204  505844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:03:34.367279  505844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:03:34.396886  505844 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 10:03:34.396925  505844 start.go:496] detecting cgroup driver to use...
	I0110 10:03:34.396959  505844 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:03:34.397020  505844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:03:34.423341  505844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:03:34.440462  505844 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:03:34.440535  505844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:03:34.462124  505844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:03:34.484007  505844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:03:34.598239  505844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:03:34.725310  505844 docker.go:234] disabling docker service ...
	I0110 10:03:34.725383  505844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:03:34.746969  505844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:03:34.761080  505844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:03:34.879103  505844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:03:34.996175  505844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:03:35.015549  505844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:03:35.031328  505844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:03:35.031453  505844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:03:35.040814  505844 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:03:35.040895  505844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:03:35.051003  505844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:03:35.060854  505844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:03:35.070884  505844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:03:35.080659  505844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:03:35.090784  505844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:03:35.105909  505844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:03:35.115183  505844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:03:35.122991  505844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:03:35.130645  505844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:03:35.264466  505844 ssh_runner.go:195] Run: sudo systemctl restart crio
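The cri-o configuration performed in the lines above boils down to a handful of edits; a condensed sketch of the same steps, with the paths and values exactly as logged in this run:

    # point crictl at the cri-o socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and the cgroup driver in the cri-o drop-in config
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # apply
    sudo systemctl daemon-reload && sudo systemctl restart crio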
	I0110 10:03:35.433542  505844 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:03:35.433610  505844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:03:35.438042  505844 start.go:574] Will wait 60s for crictl version
	I0110 10:03:35.438109  505844 ssh_runner.go:195] Run: which crictl
	I0110 10:03:35.441855  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:03:35.472180  505844 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:03:35.472325  505844 ssh_runner.go:195] Run: crio --version
	I0110 10:03:35.501728  505844 ssh_runner.go:195] Run: crio --version
	I0110 10:03:35.535322  505844 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:03:35.538175  505844 cli_runner.go:164] Run: docker network inspect no-preload-964204 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:03:35.557452  505844 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:03:35.561190  505844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:03:35.570765  505844 kubeadm.go:884] updating cluster {Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:03:35.570886  505844 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:03:35.570934  505844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:03:35.596308  505844 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I0110 10:03:35.596334  505844 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0110 10:03:35.596368  505844 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:03:35.596597  505844 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I0110 10:03:35.596693  505844 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 10:03:35.596788  505844 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I0110 10:03:35.596874  505844 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I0110 10:03:35.596951  505844 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I0110 10:03:35.597033  505844 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I0110 10:03:35.597120  505844 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I0110 10:03:35.599833  505844 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I0110 10:03:35.600278  505844 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:03:35.600316  505844 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 10:03:35.600524  505844 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I0110 10:03:35.600539  505844 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I0110 10:03:35.600595  505844 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I0110 10:03:35.600662  505844 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I0110 10:03:35.600741  505844 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I0110 10:03:36.010887  505844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I0110 10:03:36.023431  505844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I0110 10:03:36.044461  505844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I0110 10:03:36.046616  505844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 10:03:36.051240  505844 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I0110 10:03:36.051335  505844 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I0110 10:03:36.051433  505844 ssh_runner.go:195] Run: which crictl
	I0110 10:03:36.064110  505844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I0110 10:03:36.073607  505844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I0110 10:03:36.077605  505844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I0110 10:03:36.087781  505844 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I0110 10:03:36.087827  505844 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I0110 10:03:36.087882  505844 ssh_runner.go:195] Run: which crictl
	I0110 10:03:36.158367  505844 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5" in container runtime
	I0110 10:03:36.158418  505844 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I0110 10:03:36.158466  505844 ssh_runner.go:195] Run: which crictl
	I0110 10:03:36.174750  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I0110 10:03:36.174808  505844 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0" in container runtime
	I0110 10:03:36.174841  505844 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 10:03:36.174978  505844 ssh_runner.go:195] Run: which crictl
	I0110 10:03:36.174915  505844 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I0110 10:03:36.175037  505844 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I0110 10:03:36.175062  505844 ssh_runner.go:195] Run: which crictl
	I0110 10:03:36.191259  505844 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f" in container runtime
	I0110 10:03:36.191316  505844 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I0110 10:03:36.191363  505844 ssh_runner.go:195] Run: which crictl
	I0110 10:03:36.202564  505844 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856" in container runtime
	I0110 10:03:36.202606  505844 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I0110 10:03:36.202662  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I0110 10:03:36.202742  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0110 10:03:36.202800  505844 ssh_runner.go:195] Run: which crictl
	I0110 10:03:36.217782  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I0110 10:03:36.217874  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 10:03:36.217933  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I0110 10:03:36.218071  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I0110 10:03:36.298906  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I0110 10:03:36.298932  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I0110 10:03:36.298979  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0110 10:03:36.321702  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I0110 10:03:36.321773  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I0110 10:03:36.321784  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 10:03:36.321825  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I0110 10:03:36.381982  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0110 10:03:36.382062  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I0110 10:03:36.382113  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I0110 10:03:36.429115  505844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I0110 10:03:36.429243  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I0110 10:03:36.429329  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I0110 10:03:36.429409  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I0110 10:03:36.429494  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 10:03:36.480413  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I0110 10:03:36.480543  505844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I0110 10:03:36.480621  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I0110 10:03:36.480701  505844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0
	I0110 10:03:36.480759  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I0110 10:03:36.535782  505844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0
	I0110 10:03:36.535901  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0110 10:03:36.535985  505844 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I0110 10:03:36.536005  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I0110 10:03:36.536081  505844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0
	I0110 10:03:36.536149  505844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I0110 10:03:36.536241  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I0110 10:03:36.536314  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I0110 10:03:36.536413  505844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I0110 10:03:36.536431  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (22434816 bytes)
	I0110 10:03:36.536474  505844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0
	I0110 10:03:36.536552  505844 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I0110 10:03:36.536571  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I0110 10:03:36.536690  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0110 10:03:36.571509  505844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I0110 10:03:36.571553  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (20682752 bytes)
	I0110 10:03:36.571597  505844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I0110 10:03:36.571657  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (24702976 bytes)
	I0110 10:03:36.579792  505844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I0110 10:03:36.579830  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (15415808 bytes)
	I0110 10:03:36.579867  505844 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I0110 10:03:36.579909  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	W0110 10:03:36.617327  505844 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0110 10:03:36.617390  505844 retry.go:84] will retry after 300ms: ssh: rejected: connect failed (open failed)
	I0110 10:03:36.625072  505844 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I0110 10:03:36.625170  505844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I0110 10:03:36.625255  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:03:36.651000  505844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	W0110 10:03:36.894884  505844 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0110 10:03:36.895131  505844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:03:37.321154  505844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I0110 10:03:37.321190  505844 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I0110 10:03:37.321246  505844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I0110 10:03:37.321252  505844 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0110 10:03:37.321281  505844 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:03:37.321331  505844 ssh_runner.go:195] Run: which crictl
	I0110 10:03:38.820970  505844 ssh_runner.go:235] Completed: which crictl: (1.499617676s)
	I0110 10:03:38.821095  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:03:38.821109  505844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.499846224s)
	I0110 10:03:38.821224  505844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I0110 10:03:38.821263  505844 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0110 10:03:38.821320  505844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0110 10:03:38.858038  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:03:40.019525  505844 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.161448127s)
	I0110 10:03:40.019616  505844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:03:40.021389  505844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.200003387s)
	I0110 10:03:40.021466  505844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I0110 10:03:40.021537  505844 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I0110 10:03:40.021635  505844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I0110 10:03:40.059369  505844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0110 10:03:40.059555  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0110 10:03:41.248727  505844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.227050502s)
	I0110 10:03:41.248759  505844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I0110 10:03:41.248767  505844 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.189175333s)
	I0110 10:03:41.248778  505844 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I0110 10:03:41.248789  505844 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0110 10:03:41.248812  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0110 10:03:41.248837  505844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I0110 10:03:43.123065  505844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.87420591s)
	I0110 10:03:43.123092  505844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I0110 10:03:43.123111  505844 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I0110 10:03:43.123158  505844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I0110 10:03:44.421673  505844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.29848655s)
	I0110 10:03:44.421702  505844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I0110 10:03:44.421723  505844 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0110 10:03:44.421785  505844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0110 10:03:45.871437  505844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.449621992s)
	I0110 10:03:45.871463  505844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I0110 10:03:45.871486  505844 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0110 10:03:45.871535  505844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0110 10:03:46.448638  505844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0110 10:03:46.448670  505844 cache_images.go:125] Successfully loaded all cached images
	I0110 10:03:46.448675  505844 cache_images.go:94] duration metric: took 10.852330045s to LoadCachedImages
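	Each cached image tarball above is copied to /var/lib/minikube/images over SSH and loaded into the CRI-O image store with "sudo podman load -i <file>". A quick sanity check by hand would be to list what the runtime now reports (a sketch, not part of the test output):

	sudo crictl images                                         # should include kube-apiserver/controller-manager/scheduler/proxy v1.35.0, etcd 3.6.6-0, coredns v1.13.1, pause 3.10.1
	sudo podman images --format '{{.Repository}}:{{.Tag}}'     # same images as seen by podman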
	I0110 10:03:46.448687  505844 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 10:03:46.448774  505844 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-964204 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:03:46.448849  505844 ssh_runner.go:195] Run: crio config
	I0110 10:03:46.511595  505844 cni.go:84] Creating CNI manager for ""
	I0110 10:03:46.511670  505844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:03:46.511711  505844 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:03:46.511764  505844 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-964204 NodeName:no-preload-964204 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:03:46.511938  505844 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-964204"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
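	The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new further down and fed to kubeadm init as a single file. A hedged sketch of a manual check of such a file, assuming the kubeadm binary staged below supports the "config validate" subcommand:

	sudo cat /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml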
	
	I0110 10:03:46.512052  505844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:03:46.520461  505844 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I0110 10:03:46.520547  505844 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I0110 10:03:46.528621  505844 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
	I0110 10:03:46.528728  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I0110 10:03:46.528830  505844 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet.sha256
	I0110 10:03:46.528862  505844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:03:46.528952  505844 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm.sha256
	I0110 10:03:46.529001  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I0110 10:03:46.536672  505844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I0110 10:03:46.536764  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/linux/arm64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (68354232 bytes)
	I0110 10:03:46.537403  505844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I0110 10:03:46.537457  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/linux/arm64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (55247032 bytes)
	I0110 10:03:46.551520  505844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I0110 10:03:46.590288  505844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I0110 10:03:46.590378  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/cache/linux/arm64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (54329636 bytes)
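	Because no preload exists, kubeadm, kubectl and kubelet are fetched against the dl.k8s.io URLs logged above, each paired with a .sha256 checksum file, and copied into /var/lib/minikube/binaries/v1.35.0. The equivalent manual download-and-verify for one of them, as a sketch using the standard sha256sum pattern:

	curl -LO https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl
	curl -LO https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # prints "kubectl: OK" when the binary matches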
	I0110 10:03:47.330503  505844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:03:47.338848  505844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 10:03:47.355121  505844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:03:47.372754  505844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I0110 10:03:47.389640  505844 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:03:47.393534  505844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:03:47.404612  505844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:03:47.518281  505844 ssh_runner.go:195] Run: sudo systemctl start kubelet
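	The kubelet unit (352 bytes) and its 10-kubeadm.conf drop-in (367 bytes) are written from memory, systemd is reloaded, and the kubelet is started before kubeadm runs. A sketch of how to inspect what systemd actually picked up on the node:

	systemctl cat kubelet               # unit file plus the 10-kubeadm.conf drop-in with the ExecStart flags shown earlier
	systemctl status kubelet --no-pager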
	I0110 10:03:47.534475  505844 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204 for IP: 192.168.76.2
	I0110 10:03:47.534548  505844 certs.go:195] generating shared ca certs ...
	I0110 10:03:47.534581  505844 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:03:47.534758  505844 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:03:47.534835  505844 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:03:47.534870  505844 certs.go:257] generating profile certs ...
	I0110 10:03:47.534971  505844 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.key
	I0110 10:03:47.535008  505844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt with IP's: []
	I0110 10:03:47.937173  505844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt ...
	I0110 10:03:47.937222  505844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: {Name:mk9c6ff0b1e7aeb5e98bfffce76ef6c3cd9d53f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:03:47.937436  505844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.key ...
	I0110 10:03:47.937448  505844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.key: {Name:mkc6f0b2dead909ffb804edcc1c31847554731c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:03:47.937555  505844 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.key.50e5be67
	I0110 10:03:47.937575  505844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.crt.50e5be67 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 10:03:48.256438  505844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.crt.50e5be67 ...
	I0110 10:03:48.256474  505844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.crt.50e5be67: {Name:mk4cc3a68b9d1ebb881ebbec990c733ed2a96aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:03:48.256681  505844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.key.50e5be67 ...
	I0110 10:03:48.256699  505844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.key.50e5be67: {Name:mk3ed6cac149da3fad89292fdff9322b8f1b5965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:03:48.256789  505844 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.crt.50e5be67 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.crt
	I0110 10:03:48.256876  505844 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.key.50e5be67 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.key
	I0110 10:03:48.256938  505844 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.key
	I0110 10:03:48.256956  505844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.crt with IP's: []
	I0110 10:03:48.477430  505844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.crt ...
	I0110 10:03:48.477464  505844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.crt: {Name:mkb12283642d67851245c99e023b42694b863b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:03:48.477651  505844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.key ...
	I0110 10:03:48.477665  505844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.key: {Name:mk2b15aeed5ee064ab542803e26ebdd4a2b67746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:03:48.477862  505844 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:03:48.477910  505844 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:03:48.477925  505844 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:03:48.477951  505844 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:03:48.477986  505844 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:03:48.478012  505844 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:03:48.478061  505844 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:03:48.478676  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:03:48.497836  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:03:48.516628  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:03:48.535094  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:03:48.554514  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 10:03:48.572286  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:03:48.590243  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:03:48.607735  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:03:48.625268  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:03:48.642849  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:03:48.662553  505844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:03:48.681651  505844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:03:48.694070  505844 ssh_runner.go:195] Run: openssl version
	I0110 10:03:48.700615  505844 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:03:48.707945  505844 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:03:48.715417  505844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:03:48.719665  505844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:03:48.719731  505844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:03:48.765612  505844 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:03:48.773021  505844 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
	I0110 10:03:48.780341  505844 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:03:48.787606  505844 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:03:48.795646  505844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:03:48.799516  505844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:03:48.799588  505844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:03:48.840292  505844 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:03:48.847854  505844 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
	I0110 10:03:48.855101  505844 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:03:48.862530  505844 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:03:48.870196  505844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:03:48.874251  505844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:03:48.874322  505844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:03:48.915217  505844 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:03:48.922921  505844 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
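	Each CA file placed under /usr/share/ca-certificates above is hashed with openssl and symlinked as /etc/ssl/certs/<hash>.0, the subject-hash naming OpenSSL uses for CA lookup. The same pattern done by hand for the minikube CA (hash b5213941 per the log), as a sketch:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"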
	I0110 10:03:48.930310  505844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:03:48.933949  505844 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 10:03:48.934005  505844 kubeadm.go:401] StartCluster: {Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:03:48.934098  505844 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:03:48.934157  505844 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:03:48.963803  505844 cri.go:96] found id: ""
	I0110 10:03:48.963873  505844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:03:48.971736  505844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 10:03:48.979651  505844 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:03:48.979719  505844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:03:48.987626  505844 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:03:48.987650  505844 kubeadm.go:158] found existing configuration files:
	
	I0110 10:03:48.987705  505844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 10:03:48.995133  505844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:03:48.995208  505844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:03:49.002454  505844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 10:03:49.011384  505844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:03:49.011455  505844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:03:49.019013  505844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 10:03:49.026846  505844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:03:49.026962  505844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:03:49.034356  505844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 10:03:49.042063  505844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:03:49.042141  505844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 10:03:49.049537  505844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:03:49.087796  505844 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:03:49.088147  505844 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:03:49.190382  505844 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:03:49.190494  505844 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:03:49.190555  505844 kubeadm.go:319] OS: Linux
	I0110 10:03:49.190615  505844 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:03:49.190690  505844 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:03:49.190769  505844 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:03:49.190861  505844 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:03:49.190940  505844 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:03:49.191010  505844 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:03:49.191057  505844 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:03:49.191106  505844 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:03:49.191179  505844 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:03:49.256206  505844 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:03:49.256399  505844 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:03:49.256567  505844 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:03:49.272899  505844 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:03:49.281079  505844 out.go:252]   - Generating certificates and keys ...
	I0110 10:03:49.281240  505844 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:03:49.281359  505844 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:03:49.706549  505844 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:03:50.233933  505844 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:03:50.720970  505844 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:03:50.852723  505844 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:03:50.927790  505844 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:03:50.928338  505844 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-964204] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 10:03:51.400157  505844 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:03:51.400703  505844 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-964204] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 10:03:51.479015  505844 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:03:51.576984  505844 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:03:51.727703  505844 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:03:51.728042  505844 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:03:51.805605  505844 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:03:52.197020  505844 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:03:52.671467  505844 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:03:53.385927  505844 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:03:53.478466  505844 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:03:53.479112  505844 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:03:53.481759  505844 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:03:53.518303  505844 out.go:252]   - Booting up control plane ...
	I0110 10:03:53.518426  505844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:03:53.518507  505844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:03:53.518573  505844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:03:53.518677  505844 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:03:53.518770  505844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:03:53.521668  505844 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:03:53.521951  505844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:03:53.521995  505844 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:03:53.662662  505844 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:03:53.663115  505844 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:03:54.164140  505844 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.597954ms
	I0110 10:03:54.167781  505844 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 10:03:54.167877  505844 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0110 10:03:54.168477  505844 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 10:03:54.168593  505844 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 10:03:55.676351  505844 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.508085224s
	I0110 10:03:57.561576  505844 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.39371103s
	I0110 10:03:59.669956  505844 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502046388s
	I0110 10:03:59.703075  505844 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 10:03:59.720034  505844 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 10:03:59.732864  505844 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 10:03:59.733077  505844 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-964204 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 10:03:59.744403  505844 kubeadm.go:319] [bootstrap-token] Using token: a1ksq8.yuuaa5k764wnwmvj
	I0110 10:03:59.747527  505844 out.go:252]   - Configuring RBAC rules ...
	I0110 10:03:59.747655  505844 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 10:03:59.751545  505844 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 10:03:59.762163  505844 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 10:03:59.766434  505844 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 10:03:59.770847  505844 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 10:03:59.775001  505844 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 10:04:00.089812  505844 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 10:04:00.634750  505844 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 10:04:01.080256  505844 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 10:04:01.081438  505844 kubeadm.go:319] 
	I0110 10:04:01.081533  505844 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 10:04:01.081558  505844 kubeadm.go:319] 
	I0110 10:04:01.081650  505844 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 10:04:01.081659  505844 kubeadm.go:319] 
	I0110 10:04:01.081684  505844 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 10:04:01.081743  505844 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 10:04:01.081794  505844 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 10:04:01.081798  505844 kubeadm.go:319] 
	I0110 10:04:01.081852  505844 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 10:04:01.081856  505844 kubeadm.go:319] 
	I0110 10:04:01.081904  505844 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 10:04:01.081908  505844 kubeadm.go:319] 
	I0110 10:04:01.081959  505844 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 10:04:01.082044  505844 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 10:04:01.082114  505844 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 10:04:01.082118  505844 kubeadm.go:319] 
	I0110 10:04:01.082202  505844 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 10:04:01.082278  505844 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 10:04:01.082286  505844 kubeadm.go:319] 
	I0110 10:04:01.082370  505844 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a1ksq8.yuuaa5k764wnwmvj \
	I0110 10:04:01.082472  505844 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e \
	I0110 10:04:01.082492  505844 kubeadm.go:319] 	--control-plane 
	I0110 10:04:01.082496  505844 kubeadm.go:319] 
	I0110 10:04:01.082580  505844 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 10:04:01.082584  505844 kubeadm.go:319] 
	I0110 10:04:01.082667  505844 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a1ksq8.yuuaa5k764wnwmvj \
	I0110 10:04:01.082769  505844 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e 
	I0110 10:04:01.085679  505844 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 10:04:01.086104  505844 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 10:04:01.086216  505844 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 10:04:01.086235  505844 cni.go:84] Creating CNI manager for ""
	I0110 10:04:01.086242  505844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:04:01.089311  505844 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 10:04:01.092369  505844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 10:04:01.096901  505844 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 10:04:01.096925  505844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 10:04:01.111685  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 10:04:01.432966  505844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 10:04:01.433114  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:04:01.433186  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-964204 minikube.k8s.io/updated_at=2026_01_10T10_04_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=no-preload-964204 minikube.k8s.io/primary=true
	I0110 10:04:01.587525  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:04:01.587625  505844 ops.go:34] apiserver oom_adj: -16
	I0110 10:04:02.088371  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:04:02.588430  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:04:03.087974  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:04:03.588376  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:04:04.087681  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:04:04.588429  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:04:05.087785  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:04:05.588056  505844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:04:05.721543  505844 kubeadm.go:1114] duration metric: took 4.288494909s to wait for elevateKubeSystemPrivileges
	I0110 10:04:05.721574  505844 kubeadm.go:403] duration metric: took 16.787577007s to StartCluster
	I0110 10:04:05.721591  505844 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:04:05.721650  505844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:04:05.722330  505844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:04:05.722536  505844 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:04:05.722647  505844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 10:04:05.722881  505844 config.go:182] Loaded profile config "no-preload-964204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:04:05.722924  505844 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:04:05.722992  505844 addons.go:70] Setting storage-provisioner=true in profile "no-preload-964204"
	I0110 10:04:05.723007  505844 addons.go:239] Setting addon storage-provisioner=true in "no-preload-964204"
	I0110 10:04:05.723031  505844 host.go:66] Checking if "no-preload-964204" exists ...
	I0110 10:04:05.723658  505844 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:05.723907  505844 addons.go:70] Setting default-storageclass=true in profile "no-preload-964204"
	I0110 10:04:05.723930  505844 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-964204"
	I0110 10:04:05.724200  505844 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:05.727770  505844 out.go:179] * Verifying Kubernetes components...
	I0110 10:04:05.730716  505844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:04:05.772739  505844 addons.go:239] Setting addon default-storageclass=true in "no-preload-964204"
	I0110 10:04:05.772802  505844 host.go:66] Checking if "no-preload-964204" exists ...
	I0110 10:04:05.773366  505844 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:05.779715  505844 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:04:05.782682  505844 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:04:05.782723  505844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:04:05.782793  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:05.814366  505844 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:04:05.814394  505844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:04:05.814462  505844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:05.823670  505844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:05.866360  505844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33429 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:06.117342  505844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 10:04:06.117590  505844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:04:06.173832  505844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:04:06.210815  505844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:04:06.605663  505844 node_ready.go:35] waiting up to 6m0s for node "no-preload-964204" to be "Ready" ...
	I0110 10:04:06.605960  505844 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0110 10:04:07.082042  505844 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0110 10:04:07.084958  505844 addons.go:530] duration metric: took 1.362022428s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 10:04:07.111058  505844 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-964204" context rescaled to 1 replicas
	W0110 10:04:08.608814  505844 node_ready.go:57] node "no-preload-964204" has "Ready":"False" status (will retry)
	W0110 10:04:11.108835  505844 node_ready.go:57] node "no-preload-964204" has "Ready":"False" status (will retry)
	W0110 10:04:13.109123  505844 node_ready.go:57] node "no-preload-964204" has "Ready":"False" status (will retry)
	W0110 10:04:15.109823  505844 node_ready.go:57] node "no-preload-964204" has "Ready":"False" status (will retry)
	W0110 10:04:17.608765  505844 node_ready.go:57] node "no-preload-964204" has "Ready":"False" status (will retry)
	I0110 10:04:19.608323  505844 node_ready.go:49] node "no-preload-964204" is "Ready"
	I0110 10:04:19.608355  505844 node_ready.go:38] duration metric: took 13.002658094s for node "no-preload-964204" to be "Ready" ...
	I0110 10:04:19.608370  505844 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:04:19.608431  505844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:04:19.620151  505844 api_server.go:72] duration metric: took 13.897580072s to wait for apiserver process to appear ...
	I0110 10:04:19.620178  505844 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:04:19.620197  505844 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:04:19.628660  505844 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:04:19.630407  505844 api_server.go:141] control plane version: v1.35.0
	I0110 10:04:19.630436  505844 api_server.go:131] duration metric: took 10.249518ms to wait for apiserver health ...
	I0110 10:04:19.630445  505844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:04:19.637469  505844 system_pods.go:59] 8 kube-system pods found
	I0110 10:04:19.637509  505844 system_pods.go:61] "coredns-7d764666f9-nbrjs" [26b2eccf-72f4-4fee-bd27-95ab393ab006] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:04:19.637518  505844 system_pods.go:61] "etcd-no-preload-964204" [0466a1f7-5a61-4516-a394-9e671cb0fd86] Running
	I0110 10:04:19.637524  505844 system_pods.go:61] "kindnet-fmp9h" [e91c85ce-4c93-4059-99c2-94f99d1adf02] Running
	I0110 10:04:19.637528  505844 system_pods.go:61] "kube-apiserver-no-preload-964204" [3c3ed06f-a02a-41f6-b884-61f575c33979] Running
	I0110 10:04:19.637535  505844 system_pods.go:61] "kube-controller-manager-no-preload-964204" [c3816078-65c5-491c-9198-9d54c097e217] Running
	I0110 10:04:19.637540  505844 system_pods.go:61] "kube-proxy-7f6q4" [02ce65ed-8383-4cd3-aae8-a5292c0b3ab1] Running
	I0110 10:04:19.637545  505844 system_pods.go:61] "kube-scheduler-no-preload-964204" [a5e9ae4f-a95a-4e42-805d-cc803cbeb877] Running
	I0110 10:04:19.637556  505844 system_pods.go:61] "storage-provisioner" [0a72c05f-1ea6-4b65-a567-cdea38d0054d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:04:19.637564  505844 system_pods.go:74] duration metric: took 7.111279ms to wait for pod list to return data ...
	I0110 10:04:19.637579  505844 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:04:19.646155  505844 default_sa.go:45] found service account: "default"
	I0110 10:04:19.646184  505844 default_sa.go:55] duration metric: took 8.599068ms for default service account to be created ...
	I0110 10:04:19.646195  505844 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:04:19.648861  505844 system_pods.go:86] 8 kube-system pods found
	I0110 10:04:19.648895  505844 system_pods.go:89] "coredns-7d764666f9-nbrjs" [26b2eccf-72f4-4fee-bd27-95ab393ab006] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:04:19.648902  505844 system_pods.go:89] "etcd-no-preload-964204" [0466a1f7-5a61-4516-a394-9e671cb0fd86] Running
	I0110 10:04:19.648909  505844 system_pods.go:89] "kindnet-fmp9h" [e91c85ce-4c93-4059-99c2-94f99d1adf02] Running
	I0110 10:04:19.648914  505844 system_pods.go:89] "kube-apiserver-no-preload-964204" [3c3ed06f-a02a-41f6-b884-61f575c33979] Running
	I0110 10:04:19.648919  505844 system_pods.go:89] "kube-controller-manager-no-preload-964204" [c3816078-65c5-491c-9198-9d54c097e217] Running
	I0110 10:04:19.648923  505844 system_pods.go:89] "kube-proxy-7f6q4" [02ce65ed-8383-4cd3-aae8-a5292c0b3ab1] Running
	I0110 10:04:19.648928  505844 system_pods.go:89] "kube-scheduler-no-preload-964204" [a5e9ae4f-a95a-4e42-805d-cc803cbeb877] Running
	I0110 10:04:19.648936  505844 system_pods.go:89] "storage-provisioner" [0a72c05f-1ea6-4b65-a567-cdea38d0054d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:04:19.648963  505844 retry.go:84] will retry after 300ms: missing components: kube-dns
	I0110 10:04:19.947091  505844 system_pods.go:86] 8 kube-system pods found
	I0110 10:04:19.947176  505844 system_pods.go:89] "coredns-7d764666f9-nbrjs" [26b2eccf-72f4-4fee-bd27-95ab393ab006] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:04:19.947201  505844 system_pods.go:89] "etcd-no-preload-964204" [0466a1f7-5a61-4516-a394-9e671cb0fd86] Running
	I0110 10:04:19.947247  505844 system_pods.go:89] "kindnet-fmp9h" [e91c85ce-4c93-4059-99c2-94f99d1adf02] Running
	I0110 10:04:19.947271  505844 system_pods.go:89] "kube-apiserver-no-preload-964204" [3c3ed06f-a02a-41f6-b884-61f575c33979] Running
	I0110 10:04:19.947293  505844 system_pods.go:89] "kube-controller-manager-no-preload-964204" [c3816078-65c5-491c-9198-9d54c097e217] Running
	I0110 10:04:19.947332  505844 system_pods.go:89] "kube-proxy-7f6q4" [02ce65ed-8383-4cd3-aae8-a5292c0b3ab1] Running
	I0110 10:04:19.947360  505844 system_pods.go:89] "kube-scheduler-no-preload-964204" [a5e9ae4f-a95a-4e42-805d-cc803cbeb877] Running
	I0110 10:04:19.947382  505844 system_pods.go:89] "storage-provisioner" [0a72c05f-1ea6-4b65-a567-cdea38d0054d] Running
	I0110 10:04:19.947423  505844 system_pods.go:126] duration metric: took 301.220734ms to wait for k8s-apps to be running ...
	I0110 10:04:19.947450  505844 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:04:19.947538  505844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:04:19.963862  505844 system_svc.go:56] duration metric: took 16.402713ms WaitForService to wait for kubelet
	I0110 10:04:19.963893  505844 kubeadm.go:587] duration metric: took 14.241326898s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:04:19.963913  505844 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:04:19.967205  505844 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:04:19.967234  505844 node_conditions.go:123] node cpu capacity is 2
	I0110 10:04:19.967248  505844 node_conditions.go:105] duration metric: took 3.329528ms to run NodePressure ...
	I0110 10:04:19.967261  505844 start.go:242] waiting for startup goroutines ...
	I0110 10:04:19.967268  505844 start.go:247] waiting for cluster config update ...
	I0110 10:04:19.967279  505844 start.go:256] writing updated cluster config ...
	I0110 10:04:19.967567  505844 ssh_runner.go:195] Run: rm -f paused
	I0110 10:04:19.971248  505844 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:04:19.974818  505844 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nbrjs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:20.979580  505844 pod_ready.go:94] pod "coredns-7d764666f9-nbrjs" is "Ready"
	I0110 10:04:20.979612  505844 pod_ready.go:86] duration metric: took 1.004765783s for pod "coredns-7d764666f9-nbrjs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:20.981997  505844 pod_ready.go:83] waiting for pod "etcd-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:20.985875  505844 pod_ready.go:94] pod "etcd-no-preload-964204" is "Ready"
	I0110 10:04:20.985902  505844 pod_ready.go:86] duration metric: took 3.877047ms for pod "etcd-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:20.987848  505844 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:20.992007  505844 pod_ready.go:94] pod "kube-apiserver-no-preload-964204" is "Ready"
	I0110 10:04:20.992035  505844 pod_ready.go:86] duration metric: took 4.161489ms for pod "kube-apiserver-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:20.994491  505844 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:21.177897  505844 pod_ready.go:94] pod "kube-controller-manager-no-preload-964204" is "Ready"
	I0110 10:04:21.177923  505844 pod_ready.go:86] duration metric: took 183.409519ms for pod "kube-controller-manager-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:21.378403  505844 pod_ready.go:83] waiting for pod "kube-proxy-7f6q4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:21.778400  505844 pod_ready.go:94] pod "kube-proxy-7f6q4" is "Ready"
	I0110 10:04:21.778425  505844 pod_ready.go:86] duration metric: took 399.993617ms for pod "kube-proxy-7f6q4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:21.979168  505844 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:22.378675  505844 pod_ready.go:94] pod "kube-scheduler-no-preload-964204" is "Ready"
	I0110 10:04:22.378709  505844 pod_ready.go:86] duration metric: took 399.513964ms for pod "kube-scheduler-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:04:22.378722  505844 pod_ready.go:40] duration metric: took 2.40744097s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:04:22.438688  505844 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:04:22.442026  505844 out.go:203] 
	W0110 10:04:22.444909  505844 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:04:22.447964  505844 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:04:22.453898  505844 out.go:179] * Done! kubectl is now configured to use "no-preload-964204" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 10:04:19 no-preload-964204 crio[837]: time="2026-01-10T10:04:19.820732444Z" level=info msg="Created container 373cf595d7c57c922a828c9830b0800cd89e0b4c63af5101cb647df224b85bc6: kube-system/coredns-7d764666f9-nbrjs/coredns" id=ceaad623-08e7-4cb9-9d29-2dceb15c6751 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:04:19 no-preload-964204 crio[837]: time="2026-01-10T10:04:19.821752871Z" level=info msg="Starting container: 373cf595d7c57c922a828c9830b0800cd89e0b4c63af5101cb647df224b85bc6" id=9c9fa7c1-b374-4e9f-bf87-2b85a4b97238 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:04:19 no-preload-964204 crio[837]: time="2026-01-10T10:04:19.824676084Z" level=info msg="Started container" PID=2444 containerID=373cf595d7c57c922a828c9830b0800cd89e0b4c63af5101cb647df224b85bc6 description=kube-system/coredns-7d764666f9-nbrjs/coredns id=9c9fa7c1-b374-4e9f-bf87-2b85a4b97238 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d19ff7a2a8055c4fde943c945565f6bb92bbee9ef70215d805a6977fb780cd1
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.967785195Z" level=info msg="Running pod sandbox: default/busybox/POD" id=cba55c9f-b016-491e-826f-e73539cb99fd name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.967915585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.973257356Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:18e0990034e81434f6c8ec8d52d16aef260ee0da87993806bfbb1cc54fed9124 UID:13b4c695-6efc-4c91-a4a4-379b8ac827e5 NetNS:/var/run/netns/ef1497e2-806e-4a6a-8c28-aac27eb3e593 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000a51d78}] Aliases:map[]}"
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.973433801Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.988067129Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:18e0990034e81434f6c8ec8d52d16aef260ee0da87993806bfbb1cc54fed9124 UID:13b4c695-6efc-4c91-a4a4-379b8ac827e5 NetNS:/var/run/netns/ef1497e2-806e-4a6a-8c28-aac27eb3e593 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000a51d78}] Aliases:map[]}"
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.98838303Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.991468698Z" level=info msg="Ran pod sandbox 18e0990034e81434f6c8ec8d52d16aef260ee0da87993806bfbb1cc54fed9124 with infra container: default/busybox/POD" id=cba55c9f-b016-491e-826f-e73539cb99fd name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.993736026Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0e544f0f-fc88-42e2-944e-7deb4f7ee53b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.994009145Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0e544f0f-fc88-42e2-944e-7deb4f7ee53b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.994197275Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0e544f0f-fc88-42e2-944e-7deb4f7ee53b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.995730111Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b65a7ac8-1fe7-4190-9578-f3061368f99e name=/runtime.v1.ImageService/PullImage
	Jan 10 10:04:22 no-preload-964204 crio[837]: time="2026-01-10T10:04:22.996345151Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 10:04:25 no-preload-964204 crio[837]: time="2026-01-10T10:04:25.255747651Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b65a7ac8-1fe7-4190-9578-f3061368f99e name=/runtime.v1.ImageService/PullImage
	Jan 10 10:04:25 no-preload-964204 crio[837]: time="2026-01-10T10:04:25.256842213Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=06edadea-8179-449b-a9c9-32a1d8127bb3 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:04:25 no-preload-964204 crio[837]: time="2026-01-10T10:04:25.258564516Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6eb2416f-baf0-46b5-9343-1f9ca52ce577 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:04:25 no-preload-964204 crio[837]: time="2026-01-10T10:04:25.265509761Z" level=info msg="Creating container: default/busybox/busybox" id=c197bb60-49cf-4737-95f5-f7e2e3abaabb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:04:25 no-preload-964204 crio[837]: time="2026-01-10T10:04:25.265652311Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:04:25 no-preload-964204 crio[837]: time="2026-01-10T10:04:25.270510154Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:04:25 no-preload-964204 crio[837]: time="2026-01-10T10:04:25.271223106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:04:25 no-preload-964204 crio[837]: time="2026-01-10T10:04:25.28641247Z" level=info msg="Created container ba88ee528864a13cac758ea2860fa1115cddcf565e2a35a848709c8b0998ec2b: default/busybox/busybox" id=c197bb60-49cf-4737-95f5-f7e2e3abaabb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:04:25 no-preload-964204 crio[837]: time="2026-01-10T10:04:25.287323128Z" level=info msg="Starting container: ba88ee528864a13cac758ea2860fa1115cddcf565e2a35a848709c8b0998ec2b" id=728ca197-6efd-4da1-b12b-5e6fc05881e9 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:04:25 no-preload-964204 crio[837]: time="2026-01-10T10:04:25.289838089Z" level=info msg="Started container" PID=2502 containerID=ba88ee528864a13cac758ea2860fa1115cddcf565e2a35a848709c8b0998ec2b description=default/busybox/busybox id=728ca197-6efd-4da1-b12b-5e6fc05881e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=18e0990034e81434f6c8ec8d52d16aef260ee0da87993806bfbb1cc54fed9124
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ba88ee528864a       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   18e0990034e81       busybox                                     default
	373cf595d7c57       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      13 seconds ago      Running             coredns                   0                   2d19ff7a2a805       coredns-7d764666f9-nbrjs                    kube-system
	89c5cea17d645       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   a3ba70dbc86eb       storage-provisioner                         kube-system
	71ebbeedc8a6b       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   ff48e386b3e27       kindnet-fmp9h                               kube-system
	7f70ac0fc30c5       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      27 seconds ago      Running             kube-proxy                0                   e3dfcbf2bc482       kube-proxy-7f6q4                            kube-system
	47e7b63b6e39a       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      38 seconds ago      Running             kube-scheduler            0                   7b5803d3eea39       kube-scheduler-no-preload-964204            kube-system
	617b872070b4d       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      38 seconds ago      Running             kube-controller-manager   0                   76dfb049833dd       kube-controller-manager-no-preload-964204   kube-system
	08bc2aaface97       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      38 seconds ago      Running             kube-apiserver            0                   d73687a592307       kube-apiserver-no-preload-964204            kube-system
	9a2910b8d0a65       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      38 seconds ago      Running             etcd                      0                   581147cb428e9       etcd-no-preload-964204                      kube-system
	
	
	==> coredns [373cf595d7c57c922a828c9830b0800cd89e0b4c63af5101cb647df224b85bc6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48867 - 10573 "HINFO IN 6149273297564274835.4315098925962862306. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038071232s
	
	
	==> describe nodes <==
	Name:               no-preload-964204
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-964204
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=no-preload-964204
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:03:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-964204
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:04:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:04:31 +0000   Sat, 10 Jan 2026 10:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:04:31 +0000   Sat, 10 Jan 2026 10:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:04:31 +0000   Sat, 10 Jan 2026 10:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:04:31 +0000   Sat, 10 Jan 2026 10:04:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-964204
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                03ea2076-6e07-410a-8003-5ef363ddb41d
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-nbrjs                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-964204                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-fmp9h                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-964204             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-964204    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-7f6q4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-964204             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node no-preload-964204 event: Registered Node no-preload-964204 in Controller
	
	
	==> dmesg <==
	[Jan10 09:30] overlayfs: idmapped layers are currently not supported
	[Jan10 09:31] overlayfs: idmapped layers are currently not supported
	[Jan10 09:35] overlayfs: idmapped layers are currently not supported
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a2910b8d0a6572a01aa4d09b941b911e01c816c663d4f2de1e46c2deb2bc09d] <==
	{"level":"info","ts":"2026-01-10T10:03:54.808533Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:03:55.761811Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T10:03:55.761861Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T10:03:55.761927Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T10:03:55.761940Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:03:55.761956Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:03:55.763054Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:03:55.763089Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:03:55.763122Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T10:03:55.763131Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:03:55.764201Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:03:55.765284Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-964204 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:03:55.765497Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:03:55.765600Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:03:55.765694Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:03:55.765711Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T10:03:55.766026Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:03:55.766366Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:03:55.766619Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:03:55.767917Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:03:55.769162Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:03:55.769609Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T10:03:55.769746Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T10:03:55.771094Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T10:03:55.771495Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 10:04:33 up  2:47,  0 user,  load average: 1.46, 1.51, 1.90
	Linux no-preload-964204 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [71ebbeedc8a6b73fd95e56c8e1f979c5532a15198b45610e634eebc1e3cd707e] <==
	I0110 10:04:09.117552       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:04:09.118007       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:04:09.118139       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:04:09.118158       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:04:09.118169       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:04:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:04:09.317726       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:04:09.318000       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:04:09.318056       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:04:09.318272       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 10:04:09.619120       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:04:09.619216       1 metrics.go:72] Registering metrics
	I0110 10:04:09.619291       1 controller.go:711] "Syncing nftables rules"
	I0110 10:04:19.318177       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:04:19.318233       1 main.go:301] handling current node
	I0110 10:04:29.317719       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:04:29.317750       1 main.go:301] handling current node
	
	
	==> kube-apiserver [08bc2aaface97e7b5d3d8b5c7015c22acd35bf5befc52bb531220371833caf84] <==
	E0110 10:03:57.613579       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0110 10:03:57.619230       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:03:57.620938       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 10:03:57.622553       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:03:57.633868       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:03:57.634033       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 10:03:57.816791       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:03:58.389079       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 10:03:58.395531       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 10:03:58.395553       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:03:59.140898       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:03:59.216907       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:03:59.291840       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 10:03:59.303082       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 10:03:59.304160       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 10:03:59.311304       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:03:59.522612       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:04:00.603296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:04:00.633208       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 10:04:00.674941       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 10:04:05.374518       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:04:05.425668       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:04:05.430522       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:04:05.523596       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0110 10:04:31.784681       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:42820: use of closed network connection
	
	
	==> kube-controller-manager [617b872070b4dc7f495791ba7556a1bd694e78fc3584741d9e4ea55c8f8f9274] <==
	I0110 10:04:04.349750       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.349768       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.350604       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.350635       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351087       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351144       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351184       1 range_allocator.go:177] "Sending events to api server"
	I0110 10:04:04.351213       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 10:04:04.351224       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:04:04.351228       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351528       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351604       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351638       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351670       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351697       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351722       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351841       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.351885       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.365168       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.371460       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-964204" podCIDRs=["10.244.0.0/24"]
	I0110 10:04:04.432665       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.442944       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:04.442993       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:04:04.443001       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 10:04:24.348724       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [7f70ac0fc30c5aa6ed866d7523ca7b008fbdc74318dd31056a98158c4629cdaf] <==
	I0110 10:04:06.176792       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:04:06.411614       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:04:06.557130       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:06.557178       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 10:04:06.557267       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:04:06.707111       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:04:06.707172       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:04:06.732155       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:04:06.732684       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:04:06.732699       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:04:06.734786       1 config.go:200] "Starting service config controller"
	I0110 10:04:06.734796       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:04:06.734812       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:04:06.734816       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:04:06.734827       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:04:06.734831       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:04:06.735517       1 config.go:309] "Starting node config controller"
	I0110 10:04:06.735525       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:04:06.735531       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:04:06.865977       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 10:04:06.866019       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:04:06.866067       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [47e7b63b6e39af05c8a5154a57ca22a9023bcb2468365e55ddca07ea411fff18] <==
	E0110 10:03:57.579944       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 10:03:57.582588       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 10:03:57.582705       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 10:03:57.582807       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 10:03:57.582901       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 10:03:57.582982       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 10:03:57.583054       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 10:03:57.583379       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 10:03:57.583502       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 10:03:57.583700       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 10:03:57.584287       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 10:03:58.411218       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 10:03:58.420409       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 10:03:58.436104       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 10:03:58.452216       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 10:03:58.469885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 10:03:58.476934       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 10:03:58.513598       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 10:03:58.612757       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 10:03:58.744702       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 10:03:58.798738       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 10:03:58.831594       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 10:03:58.873035       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 10:03:59.109406       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I0110 10:04:00.854291       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:04:05 no-preload-964204 kubelet[1950]: I0110 10:04:05.691506    1950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e91c85ce-4c93-4059-99c2-94f99d1adf02-lib-modules\") pod \"kindnet-fmp9h\" (UID: \"e91c85ce-4c93-4059-99c2-94f99d1adf02\") " pod="kube-system/kindnet-fmp9h"
	Jan 10 10:04:05 no-preload-964204 kubelet[1950]: I0110 10:04:05.691547    1950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlxlg\" (UniqueName: \"kubernetes.io/projected/e91c85ce-4c93-4059-99c2-94f99d1adf02-kube-api-access-rlxlg\") pod \"kindnet-fmp9h\" (UID: \"e91c85ce-4c93-4059-99c2-94f99d1adf02\") " pod="kube-system/kindnet-fmp9h"
	Jan 10 10:04:05 no-preload-964204 kubelet[1950]: I0110 10:04:05.710482    1950 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 10:04:05 no-preload-964204 kubelet[1950]: W0110 10:04:05.970145    1950 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/crio-ff48e386b3e27eea54f30ab7832b0223524bcf9fe10a6b82ae69929fa84f50cf WatchSource:0}: Error finding container ff48e386b3e27eea54f30ab7832b0223524bcf9fe10a6b82ae69929fa84f50cf: Status 404 returned error can't find the container with id ff48e386b3e27eea54f30ab7832b0223524bcf9fe10a6b82ae69929fa84f50cf
	Jan 10 10:04:07 no-preload-964204 kubelet[1950]: E0110 10:04:07.477897    1950 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-964204" containerName="etcd"
	Jan 10 10:04:07 no-preload-964204 kubelet[1950]: I0110 10:04:07.523695    1950 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-7f6q4" podStartSLOduration=2.523679387 podStartE2EDuration="2.523679387s" podCreationTimestamp="2026-01-10 10:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:04:06.869717008 +0000 UTC m=+6.303081138" watchObservedRunningTime="2026-01-10 10:04:07.523679387 +0000 UTC m=+6.957043517"
	Jan 10 10:04:07 no-preload-964204 kubelet[1950]: E0110 10:04:07.717893    1950 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-964204" containerName="kube-scheduler"
	Jan 10 10:04:08 no-preload-964204 kubelet[1950]: E0110 10:04:08.288161    1950 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-964204" containerName="kube-controller-manager"
	Jan 10 10:04:12 no-preload-964204 kubelet[1950]: E0110 10:04:12.413449    1950 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-964204" containerName="kube-apiserver"
	Jan 10 10:04:12 no-preload-964204 kubelet[1950]: I0110 10:04:12.431174    1950 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-fmp9h" podStartSLOduration=4.5459151129999995 podStartE2EDuration="7.431160252s" podCreationTimestamp="2026-01-10 10:04:05 +0000 UTC" firstStartedPulling="2026-01-10 10:04:06.002810969 +0000 UTC m=+5.436175108" lastFinishedPulling="2026-01-10 10:04:08.888056108 +0000 UTC m=+8.321420247" observedRunningTime="2026-01-10 10:04:09.857697147 +0000 UTC m=+9.291061278" watchObservedRunningTime="2026-01-10 10:04:12.431160252 +0000 UTC m=+11.864524391"
	Jan 10 10:04:17 no-preload-964204 kubelet[1950]: E0110 10:04:17.479550    1950 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-964204" containerName="etcd"
	Jan 10 10:04:17 no-preload-964204 kubelet[1950]: E0110 10:04:17.726740    1950 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-964204" containerName="kube-scheduler"
	Jan 10 10:04:18 no-preload-964204 kubelet[1950]: E0110 10:04:18.296262    1950 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-964204" containerName="kube-controller-manager"
	Jan 10 10:04:19 no-preload-964204 kubelet[1950]: I0110 10:04:19.370215    1950 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 10:04:19 no-preload-964204 kubelet[1950]: I0110 10:04:19.521699    1950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26b2eccf-72f4-4fee-bd27-95ab393ab006-config-volume\") pod \"coredns-7d764666f9-nbrjs\" (UID: \"26b2eccf-72f4-4fee-bd27-95ab393ab006\") " pod="kube-system/coredns-7d764666f9-nbrjs"
	Jan 10 10:04:19 no-preload-964204 kubelet[1950]: I0110 10:04:19.521768    1950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0a72c05f-1ea6-4b65-a567-cdea38d0054d-tmp\") pod \"storage-provisioner\" (UID: \"0a72c05f-1ea6-4b65-a567-cdea38d0054d\") " pod="kube-system/storage-provisioner"
	Jan 10 10:04:19 no-preload-964204 kubelet[1950]: I0110 10:04:19.521795    1950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzlp\" (UniqueName: \"kubernetes.io/projected/26b2eccf-72f4-4fee-bd27-95ab393ab006-kube-api-access-9qzlp\") pod \"coredns-7d764666f9-nbrjs\" (UID: \"26b2eccf-72f4-4fee-bd27-95ab393ab006\") " pod="kube-system/coredns-7d764666f9-nbrjs"
	Jan 10 10:04:19 no-preload-964204 kubelet[1950]: I0110 10:04:19.521820    1950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfc84\" (UniqueName: \"kubernetes.io/projected/0a72c05f-1ea6-4b65-a567-cdea38d0054d-kube-api-access-pfc84\") pod \"storage-provisioner\" (UID: \"0a72c05f-1ea6-4b65-a567-cdea38d0054d\") " pod="kube-system/storage-provisioner"
	Jan 10 10:04:19 no-preload-964204 kubelet[1950]: W0110 10:04:19.774219    1950 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/crio-2d19ff7a2a8055c4fde943c945565f6bb92bbee9ef70215d805a6977fb780cd1 WatchSource:0}: Error finding container 2d19ff7a2a8055c4fde943c945565f6bb92bbee9ef70215d805a6977fb780cd1: Status 404 returned error can't find the container with id 2d19ff7a2a8055c4fde943c945565f6bb92bbee9ef70215d805a6977fb780cd1
	Jan 10 10:04:19 no-preload-964204 kubelet[1950]: E0110 10:04:19.865413    1950 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nbrjs" containerName="coredns"
	Jan 10 10:04:19 no-preload-964204 kubelet[1950]: I0110 10:04:19.926729    1950 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-nbrjs" podStartSLOduration=14.926714025999999 podStartE2EDuration="14.926714026s" podCreationTimestamp="2026-01-10 10:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:04:19.905277461 +0000 UTC m=+19.338641608" watchObservedRunningTime="2026-01-10 10:04:19.926714026 +0000 UTC m=+19.360078157"
	Jan 10 10:04:20 no-preload-964204 kubelet[1950]: E0110 10:04:20.870476    1950 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nbrjs" containerName="coredns"
	Jan 10 10:04:20 no-preload-964204 kubelet[1950]: I0110 10:04:20.886723    1950 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.886706165 podStartE2EDuration="13.886706165s" podCreationTimestamp="2026-01-10 10:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:04:19.92857403 +0000 UTC m=+19.361938169" watchObservedRunningTime="2026-01-10 10:04:20.886706165 +0000 UTC m=+20.320070296"
	Jan 10 10:04:21 no-preload-964204 kubelet[1950]: E0110 10:04:21.872841    1950 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nbrjs" containerName="coredns"
	Jan 10 10:04:22 no-preload-964204 kubelet[1950]: I0110 10:04:22.740158    1950 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx2zc\" (UniqueName: \"kubernetes.io/projected/13b4c695-6efc-4c91-a4a4-379b8ac827e5-kube-api-access-sx2zc\") pod \"busybox\" (UID: \"13b4c695-6efc-4c91-a4a4-379b8ac827e5\") " pod="default/busybox"
	
	
	==> storage-provisioner [89c5cea17d645cf4ce2c6fab5211a6cb2e3b713a96ba0591221bd34257ffad90] <==
	I0110 10:04:19.781700       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:04:19.796112       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:04:19.796383       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 10:04:19.798861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:19.812402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:04:19.812605       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:04:19.813242       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d8770a8-2c32-4636-b869-a554550e1ab6", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-964204_72c9b3cf-6161-4ffc-9db0-0e1de67dc8ff became leader
	I0110 10:04:19.815451       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-964204_72c9b3cf-6161-4ffc-9db0-0e1de67dc8ff!
	W0110 10:04:19.846555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:19.856772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:04:19.918289       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-964204_72c9b3cf-6161-4ffc-9db0-0e1de67dc8ff!
	W0110 10:04:21.860027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:21.866896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:23.870143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:23.875054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:25.877928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:25.885166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:27.888395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:27.893538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:29.897202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:29.907291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:31.910462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:04:31.920229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-964204 -n no-preload-964204
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-964204 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (5.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-964204 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-964204 --alsologtostderr -v=1: exit status 80 (1.537953192s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-964204 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 10:05:47.525569  513048 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:05:47.525699  513048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:05:47.525709  513048 out.go:374] Setting ErrFile to fd 2...
	I0110 10:05:47.525714  513048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:05:47.526186  513048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:05:47.526599  513048 out.go:368] Setting JSON to false
	I0110 10:05:47.526620  513048 mustload.go:66] Loading cluster: no-preload-964204
	I0110 10:05:47.527303  513048 config.go:182] Loaded profile config "no-preload-964204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:05:47.527780  513048 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:05:47.545420  513048 host.go:66] Checking if "no-preload-964204" exists ...
	I0110 10:05:47.545762  513048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:05:47.609004  513048 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 10:05:47.598603147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:05:47.609714  513048 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-964204 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool
=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 10:05:47.615620  513048 out.go:179] * Pausing node no-preload-964204 ... 
	I0110 10:05:47.618800  513048 host.go:66] Checking if "no-preload-964204" exists ...
	I0110 10:05:47.619135  513048 ssh_runner.go:195] Run: systemctl --version
	I0110 10:05:47.619189  513048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:05:47.635910  513048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:05:47.739631  513048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:05:47.761096  513048 pause.go:52] kubelet running: true
	I0110 10:05:47.761184  513048 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:05:48.020400  513048 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:05:48.020513  513048 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:05:48.088613  513048 cri.go:96] found id: "cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f"
	I0110 10:05:48.088635  513048 cri.go:96] found id: "abc42cffac590ed77549483d1a05755448e312a585d4204a268fc5e5f6a03e0a"
	I0110 10:05:48.088640  513048 cri.go:96] found id: "6d4368ac3242cdcba3ee7fb78eb2026a6111050fe391df24e78edf0b58cf778f"
	I0110 10:05:48.088643  513048 cri.go:96] found id: "fe92ff1c402a775ab835548cd8b9b6ed7a60eea52715b689a2348e008a515c33"
	I0110 10:05:48.088646  513048 cri.go:96] found id: "e5367860812c0fd9dbc45503fe4cb48fee1dbd289d6727499208c2235c12dfda"
	I0110 10:05:48.088649  513048 cri.go:96] found id: "b27a68f656d959fe8dd95b31847ae1379016e414f61244c53a75e06cd9529ef1"
	I0110 10:05:48.088652  513048 cri.go:96] found id: "146888a99c32f1421edf0f2758f99439bfb9a9b52b71842262af693d53517c9b"
	I0110 10:05:48.088655  513048 cri.go:96] found id: "95f695558eee3b836eb9c525cc507ba61a2606e94eb5c3f56adb26321cc21e29"
	I0110 10:05:48.088658  513048 cri.go:96] found id: "c58341e383779a703a569adcc9010c3f6caf2719864eabf706b906edf6cb526c"
	I0110 10:05:48.088664  513048 cri.go:96] found id: "2d63e97a45900511ce7398bafb57d28bb25cc046f89d1326f20113a76e6d08df"
	I0110 10:05:48.088667  513048 cri.go:96] found id: "5bba4f2fe6e9a36f29cd4910fddfe6aff4d4e5cb154e4bdf8fb68ba0e7ea0c95"
	I0110 10:05:48.088670  513048 cri.go:96] found id: ""
	I0110 10:05:48.088719  513048 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:05:48.099827  513048 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:05:48Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:05:48.234202  513048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:05:48.247562  513048 pause.go:52] kubelet running: false
	I0110 10:05:48.247661  513048 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:05:48.402487  513048 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:05:48.402566  513048 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:05:48.471325  513048 cri.go:96] found id: "cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f"
	I0110 10:05:48.471351  513048 cri.go:96] found id: "abc42cffac590ed77549483d1a05755448e312a585d4204a268fc5e5f6a03e0a"
	I0110 10:05:48.471357  513048 cri.go:96] found id: "6d4368ac3242cdcba3ee7fb78eb2026a6111050fe391df24e78edf0b58cf778f"
	I0110 10:05:48.471360  513048 cri.go:96] found id: "fe92ff1c402a775ab835548cd8b9b6ed7a60eea52715b689a2348e008a515c33"
	I0110 10:05:48.471364  513048 cri.go:96] found id: "e5367860812c0fd9dbc45503fe4cb48fee1dbd289d6727499208c2235c12dfda"
	I0110 10:05:48.471368  513048 cri.go:96] found id: "b27a68f656d959fe8dd95b31847ae1379016e414f61244c53a75e06cd9529ef1"
	I0110 10:05:48.471370  513048 cri.go:96] found id: "146888a99c32f1421edf0f2758f99439bfb9a9b52b71842262af693d53517c9b"
	I0110 10:05:48.471373  513048 cri.go:96] found id: "95f695558eee3b836eb9c525cc507ba61a2606e94eb5c3f56adb26321cc21e29"
	I0110 10:05:48.471376  513048 cri.go:96] found id: "c58341e383779a703a569adcc9010c3f6caf2719864eabf706b906edf6cb526c"
	I0110 10:05:48.471413  513048 cri.go:96] found id: "2d63e97a45900511ce7398bafb57d28bb25cc046f89d1326f20113a76e6d08df"
	I0110 10:05:48.471424  513048 cri.go:96] found id: "5bba4f2fe6e9a36f29cd4910fddfe6aff4d4e5cb154e4bdf8fb68ba0e7ea0c95"
	I0110 10:05:48.471427  513048 cri.go:96] found id: ""
	I0110 10:05:48.471489  513048 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:05:48.746278  513048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:05:48.759562  513048 pause.go:52] kubelet running: false
	I0110 10:05:48.759665  513048 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:05:48.921065  513048 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:05:48.921156  513048 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:05:48.988813  513048 cri.go:96] found id: "cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f"
	I0110 10:05:48.988835  513048 cri.go:96] found id: "abc42cffac590ed77549483d1a05755448e312a585d4204a268fc5e5f6a03e0a"
	I0110 10:05:48.988841  513048 cri.go:96] found id: "6d4368ac3242cdcba3ee7fb78eb2026a6111050fe391df24e78edf0b58cf778f"
	I0110 10:05:48.988862  513048 cri.go:96] found id: "fe92ff1c402a775ab835548cd8b9b6ed7a60eea52715b689a2348e008a515c33"
	I0110 10:05:48.988867  513048 cri.go:96] found id: "e5367860812c0fd9dbc45503fe4cb48fee1dbd289d6727499208c2235c12dfda"
	I0110 10:05:48.988871  513048 cri.go:96] found id: "b27a68f656d959fe8dd95b31847ae1379016e414f61244c53a75e06cd9529ef1"
	I0110 10:05:48.988874  513048 cri.go:96] found id: "146888a99c32f1421edf0f2758f99439bfb9a9b52b71842262af693d53517c9b"
	I0110 10:05:48.988877  513048 cri.go:96] found id: "95f695558eee3b836eb9c525cc507ba61a2606e94eb5c3f56adb26321cc21e29"
	I0110 10:05:48.988880  513048 cri.go:96] found id: "c58341e383779a703a569adcc9010c3f6caf2719864eabf706b906edf6cb526c"
	I0110 10:05:48.988886  513048 cri.go:96] found id: "2d63e97a45900511ce7398bafb57d28bb25cc046f89d1326f20113a76e6d08df"
	I0110 10:05:48.988895  513048 cri.go:96] found id: "5bba4f2fe6e9a36f29cd4910fddfe6aff4d4e5cb154e4bdf8fb68ba0e7ea0c95"
	I0110 10:05:48.988898  513048 cri.go:96] found id: ""
	I0110 10:05:48.988946  513048 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:05:49.003440  513048 out.go:203] 
	W0110 10:05:49.006924  513048 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:05:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:05:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 10:05:49.006957  513048 out.go:285] * 
	* 
	W0110 10:05:49.011062  513048 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 10:05:49.013274  513048 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-964204 --alsologtostderr -v=1 failed: exit status 80
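For anyone re-checking this by hand: the pause command above exits 80 because its container-listing step, sudo runc list -f json, repeatedly fails with "open /run/runc: no such file or directory" (see the stderr block). A rough manual triage sketch, assuming the no-preload-964204 profile is still up and using only commands that already appear in this log, could be:

	# rough triage sketch (not part of the test harness); profile name taken from this run
	out/minikube-linux-arm64 ssh -p no-preload-964204 -- sudo runc list -f json       # the step that fails during pause
	out/minikube-linux-arm64 ssh -p no-preload-964204 -- sudo ls -ld /run/runc        # check whether the runc state directory exists at all
	out/minikube-linux-arm64 ssh -p no-preload-964204 -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # the listing step that did succeed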
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-964204
helpers_test.go:244: (dbg) docker inspect no-preload-964204:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98",
	        "Created": "2026-01-10T10:03:28.469288354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510494,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:04:46.800937069Z",
	            "FinishedAt": "2026-01-10T10:04:45.9771762Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/hosts",
	        "LogPath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98-json.log",
	        "Name": "/no-preload-964204",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-964204:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-964204",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98",
	                "LowerDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-964204",
	                "Source": "/var/lib/docker/volumes/no-preload-964204/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-964204",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-964204",
	                "name.minikube.sigs.k8s.io": "no-preload-964204",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ae199936b74e9127a19e2e837ec76ff42a5b99cb4e2005b3a0bee7c7e83a28ef",
	            "SandboxKey": "/var/run/docker/netns/ae199936b74e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-964204": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:c2:3b:c1:95:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "23c88132d52b29689462f98c2dbfa4655b3eded5f2a83bfc6642616f52ac86e6",
	                    "EndpointID": "483fa5afc1c062a9d31d519e53081d02674babe65375cfeae9e7c584b882b4cf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-964204",
	                        "d5228a313f58"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
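One detail worth noting when reading the inspect dump above: the host port the harness SSHes to (33434, per the sshutil line in the stderr) is simply the 22/tcp entry under NetworkSettings.Ports. If the container is still running, the same value can be pulled with a plain docker format query, equivalent to the cli_runner invocation logged earlier:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-964204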
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-964204 -n no-preload-964204
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-964204 -n no-preload-964204: exit status 2 (339.548553ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-964204 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-964204 logs -n 25: (1.295529241s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:57 UTC │ 10 Jan 26 09:58 UTC │
	│ delete  │ -p cert-expiration-599529                                                                                                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │ 10 Jan 26 09:58 UTC │
	│ start   │ -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-524845 │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │                     │
	│ delete  │ -p force-systemd-env-646877                                                                                                                                                                                                                   │ force-systemd-env-646877  │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p cert-options-525619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ cert-options-525619 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ -p cert-options-525619 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ delete  │ -p cert-options-525619                                                                                                                                                                                                                        │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │                     │
	│ stop    │ -p old-k8s-version-729486 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │ 10 Jan 26 10:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-729486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:02 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:03 UTC │
	│ image   │ old-k8s-version-729486 image list --format=json                                                                                                                                                                                               │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ pause   │ -p old-k8s-version-729486 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │                     │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │                     │
	│ stop    │ -p no-preload-964204 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable dashboard -p no-preload-964204 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:05 UTC │
	│ image   │ no-preload-964204 image list --format=json                                                                                                                                                                                                    │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ pause   │ -p no-preload-964204 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:04:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:04:46.527587  510366 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:04:46.527710  510366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:04:46.527721  510366 out.go:374] Setting ErrFile to fd 2...
	I0110 10:04:46.527728  510366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:04:46.528007  510366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:04:46.528382  510366 out.go:368] Setting JSON to false
	I0110 10:04:46.529253  510366 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10036,"bootTime":1768029451,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:04:46.529325  510366 start.go:143] virtualization:  
	I0110 10:04:46.534544  510366 out.go:179] * [no-preload-964204] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:04:46.537652  510366 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:04:46.537671  510366 notify.go:221] Checking for updates...
	I0110 10:04:46.543404  510366 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:04:46.546202  510366 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:04:46.549148  510366 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:04:46.551971  510366 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:04:46.554727  510366 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:04:46.558167  510366 config.go:182] Loaded profile config "no-preload-964204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:04:46.558790  510366 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:04:46.593890  510366 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:04:46.594006  510366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:04:46.651080  510366 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:04:46.641909977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:04:46.651182  510366 docker.go:319] overlay module found
	I0110 10:04:46.654560  510366 out.go:179] * Using the docker driver based on existing profile
	I0110 10:04:46.657574  510366 start.go:309] selected driver: docker
	I0110 10:04:46.657597  510366 start.go:928] validating driver "docker" against &{Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:04:46.657700  510366 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:04:46.658468  510366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:04:46.713175  510366 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:04:46.704568936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:04:46.713501  510366 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:04:46.713539  510366 cni.go:84] Creating CNI manager for ""
	I0110 10:04:46.713599  510366 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:04:46.713644  510366 start.go:353] cluster config:
	{Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:04:46.718669  510366 out.go:179] * Starting "no-preload-964204" primary control-plane node in "no-preload-964204" cluster
	I0110 10:04:46.721526  510366 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:04:46.724371  510366 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:04:46.727112  510366 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:04:46.727180  510366 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:04:46.727248  510366 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/config.json ...
	I0110 10:04:46.727524  510366 cache.go:107] acquiring lock: {Name:mkaf98767e2a7d58e08cc2ca469eac45d26ab17d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727604  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0110 10:04:46.727612  510366 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 99.144µs
	I0110 10:04:46.727621  510366 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0110 10:04:46.727632  510366 cache.go:107] acquiring lock: {Name:mk20f45a028e063162f8cd4bcc9049083b517dce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727661  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0110 10:04:46.727666  510366 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 35.603µs
	I0110 10:04:46.727672  510366 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0110 10:04:46.727681  510366 cache.go:107] acquiring lock: {Name:mk49f61dae811454fbbf5c86caa9b028b9c6fc70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727707  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0110 10:04:46.727713  510366 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 32.812µs
	I0110 10:04:46.727718  510366 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0110 10:04:46.727739  510366 cache.go:107] acquiring lock: {Name:mk1d8ad3a0da43b5820d3ac9775158ff65f73409 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727767  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0110 10:04:46.727771  510366 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 34.471µs
	I0110 10:04:46.727777  510366 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0110 10:04:46.727785  510366 cache.go:107] acquiring lock: {Name:mk27d75a0d283ab8c320b03d40025ce2f8416bac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727810  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I0110 10:04:46.727792  510366 cache.go:107] acquiring lock: {Name:mke106dc55e7252772391fff3ed3fce4c597722f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727831  510366 cache.go:107] acquiring lock: {Name:mk5e0c44af9753c2eb4284091ed19ea2384d8759 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727861  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0110 10:04:46.727866  510366 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 36.432µs
	I0110 10:04:46.727874  510366 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0110 10:04:46.727870  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0110 10:04:46.727883  510366 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 98.996µs
	I0110 10:04:46.727888  510366 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0110 10:04:46.727821  510366 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 30.137µs
	I0110 10:04:46.727895  510366 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0110 10:04:46.727887  510366 cache.go:107] acquiring lock: {Name:mk025301e6f5fb7d9efce7266c9392491c803686 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727924  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0110 10:04:46.727931  510366 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 51.947µs
	I0110 10:04:46.727939  510366 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0110 10:04:46.727946  510366 cache.go:87] Successfully saved all images to host disk.
	I0110 10:04:46.746589  510366 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:04:46.746610  510366 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:04:46.746632  510366 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:04:46.746664  510366 start.go:360] acquireMachinesLock for no-preload-964204: {Name:mk30268180d89419a4155580e5db2de74dfb3aca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.746731  510366 start.go:364] duration metric: took 45.293µs to acquireMachinesLock for "no-preload-964204"
	I0110 10:04:46.746754  510366 start.go:96] Skipping create...Using existing machine configuration
	I0110 10:04:46.746762  510366 fix.go:54] fixHost starting: 
	I0110 10:04:46.747033  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:46.763956  510366 fix.go:112] recreateIfNeeded on no-preload-964204: state=Stopped err=<nil>
	W0110 10:04:46.763996  510366 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 10:04:46.769157  510366 out.go:252] * Restarting existing docker container for "no-preload-964204" ...
	I0110 10:04:46.769281  510366 cli_runner.go:164] Run: docker start no-preload-964204
	I0110 10:04:47.027120  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:47.049362  510366 kic.go:430] container "no-preload-964204" state is running.
	I0110 10:04:47.049755  510366 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-964204
	I0110 10:04:47.071531  510366 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/config.json ...
	I0110 10:04:47.071762  510366 machine.go:94] provisionDockerMachine start ...
	I0110 10:04:47.071865  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:47.096406  510366 main.go:144] libmachine: Using SSH client type: native
	I0110 10:04:47.096778  510366 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0110 10:04:47.096788  510366 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:04:47.097451  510366 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 10:04:50.260274  510366 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-964204
	
	I0110 10:04:50.260301  510366 ubuntu.go:182] provisioning hostname "no-preload-964204"
	I0110 10:04:50.260427  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:50.277769  510366 main.go:144] libmachine: Using SSH client type: native
	I0110 10:04:50.278087  510366 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0110 10:04:50.278106  510366 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-964204 && echo "no-preload-964204" | sudo tee /etc/hostname
	I0110 10:04:50.434404  510366 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-964204
	
	I0110 10:04:50.434482  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:50.453437  510366 main.go:144] libmachine: Using SSH client type: native
	I0110 10:04:50.453754  510366 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0110 10:04:50.453776  510366 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-964204' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-964204/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-964204' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:04:50.600785  510366 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:04:50.600852  510366 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:04:50.600889  510366 ubuntu.go:190] setting up certificates
	I0110 10:04:50.600919  510366 provision.go:84] configureAuth start
	I0110 10:04:50.601017  510366 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-964204
	I0110 10:04:50.618672  510366 provision.go:143] copyHostCerts
	I0110 10:04:50.618738  510366 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:04:50.618755  510366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:04:50.618830  510366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:04:50.618936  510366 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:04:50.618942  510366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:04:50.618970  510366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:04:50.619032  510366 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:04:50.619037  510366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:04:50.619061  510366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:04:50.619114  510366 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.no-preload-964204 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-964204]
	I0110 10:04:50.959539  510366 provision.go:177] copyRemoteCerts
	I0110 10:04:50.959659  510366 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:04:50.959720  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:50.976344  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:51.080638  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 10:04:51.100247  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 10:04:51.119071  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:04:51.138156  510366 provision.go:87] duration metric: took 537.197129ms to configureAuth
	I0110 10:04:51.138186  510366 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:04:51.138389  510366 config.go:182] Loaded profile config "no-preload-964204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:04:51.138498  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:51.156041  510366 main.go:144] libmachine: Using SSH client type: native
	I0110 10:04:51.156360  510366 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0110 10:04:51.156381  510366 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:04:51.501324  510366 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:04:51.501346  510366 machine.go:97] duration metric: took 4.429567254s to provisionDockerMachine
	I0110 10:04:51.501359  510366 start.go:293] postStartSetup for "no-preload-964204" (driver="docker")
	I0110 10:04:51.501370  510366 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:04:51.501434  510366 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:04:51.501495  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:51.523255  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:51.628748  510366 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:04:51.632025  510366 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:04:51.632051  510366 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:04:51.632061  510366 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:04:51.632120  510366 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:04:51.632205  510366 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:04:51.632317  510366 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:04:51.639641  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:04:51.661771  510366 start.go:296] duration metric: took 160.397735ms for postStartSetup
	I0110 10:04:51.661849  510366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:04:51.661905  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:51.680302  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:51.785455  510366 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:04:51.790049  510366 fix.go:56] duration metric: took 5.043280554s for fixHost
	I0110 10:04:51.790075  510366 start.go:83] releasing machines lock for "no-preload-964204", held for 5.043332361s
	I0110 10:04:51.790145  510366 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-964204
	I0110 10:04:51.811704  510366 ssh_runner.go:195] Run: cat /version.json
	I0110 10:04:51.811759  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:51.812015  510366 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:04:51.812076  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:51.829354  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:51.832763  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:52.034481  510366 ssh_runner.go:195] Run: systemctl --version
	I0110 10:04:52.041399  510366 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:04:52.079286  510366 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:04:52.083946  510366 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:04:52.084020  510366 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:04:52.092348  510366 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 10:04:52.092376  510366 start.go:496] detecting cgroup driver to use...
	I0110 10:04:52.092424  510366 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:04:52.092518  510366 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:04:52.108797  510366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:04:52.122254  510366 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:04:52.122373  510366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:04:52.138433  510366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:04:52.151992  510366 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:04:52.269409  510366 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:04:52.379454  510366 docker.go:234] disabling docker service ...
	I0110 10:04:52.379562  510366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:04:52.394703  510366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:04:52.410852  510366 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:04:52.543917  510366 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:04:52.664637  510366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:04:52.677406  510366 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:04:52.692019  510366 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:04:52.692167  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.701070  510366 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:04:52.701153  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.710183  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.719407  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.728620  510366 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:04:52.737045  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.746160  510366 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.754726  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.764080  510366 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:04:52.771828  510366 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:04:52.779618  510366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:04:52.886124  510366 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:04:53.064101  510366 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:04:53.064181  510366 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:04:53.068675  510366 start.go:574] Will wait 60s for crictl version
	I0110 10:04:53.068752  510366 ssh_runner.go:195] Run: which crictl
	I0110 10:04:53.072675  510366 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:04:53.098381  510366 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:04:53.098467  510366 ssh_runner.go:195] Run: crio --version
	I0110 10:04:53.126520  510366 ssh_runner.go:195] Run: crio --version
	I0110 10:04:53.160912  510366 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:04:53.163975  510366 cli_runner.go:164] Run: docker network inspect no-preload-964204 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:04:53.184353  510366 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:04:53.188475  510366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:04:53.207466  510366 kubeadm.go:884] updating cluster {Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:04:53.207573  510366 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:04:53.207963  510366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:04:53.256563  510366 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:04:53.256589  510366 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:04:53.256598  510366 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 10:04:53.256704  510366 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-964204 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:04:53.256788  510366 ssh_runner.go:195] Run: crio config
	I0110 10:04:53.309622  510366 cni.go:84] Creating CNI manager for ""
	I0110 10:04:53.309647  510366 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:04:53.309669  510366 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:04:53.309693  510366 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-964204 NodeName:no-preload-964204 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:04:53.309823  510366 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-964204"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:04:53.309899  510366 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:04:53.317846  510366 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:04:53.317912  510366 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:04:53.325806  510366 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 10:04:53.338632  510366 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:04:53.350900  510366 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I0110 10:04:53.364108  510366 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:04:53.367778  510366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:04:53.377275  510366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:04:53.492989  510366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:04:53.510239  510366 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204 for IP: 192.168.76.2
	I0110 10:04:53.510265  510366 certs.go:195] generating shared ca certs ...
	I0110 10:04:53.510282  510366 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:04:53.510469  510366 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:04:53.510536  510366 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:04:53.510548  510366 certs.go:257] generating profile certs ...
	I0110 10:04:53.510654  510366 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.key
	I0110 10:04:53.510744  510366 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.key.50e5be67
	I0110 10:04:53.510816  510366 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.key
	I0110 10:04:53.510950  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:04:53.510995  510366 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:04:53.511008  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:04:53.511041  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:04:53.511084  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:04:53.511116  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:04:53.511176  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:04:53.511817  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:04:53.536318  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:04:53.557995  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:04:53.578379  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:04:53.604045  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 10:04:53.622153  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:04:53.640078  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:04:53.659424  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:04:53.681106  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:04:53.701870  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:04:53.726025  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:04:53.746239  510366 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:04:53.761174  510366 ssh_runner.go:195] Run: openssl version
	I0110 10:04:53.768890  510366 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:04:53.776597  510366 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:04:53.784277  510366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:04:53.788298  510366 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:04:53.788393  510366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:04:53.829628  510366 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:04:53.837254  510366 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:04:53.844967  510366 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:04:53.853981  510366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:04:53.857972  510366 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:04:53.858091  510366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:04:53.899466  510366 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:04:53.910409  510366 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:04:53.919978  510366 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:04:53.927692  510366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:04:53.932092  510366 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:04:53.932193  510366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:04:53.976752  510366 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
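
The three cert-install rounds above follow one pattern: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject-name hash, then symlink /etc/ssl/certs/<hash>.0 at it. As a rough illustration only (this is not minikube's code; the path and helper name are invented for the example), a Go sketch of that pattern:

// Illustrative sketch of the CA-install pattern shown in the log above:
// hash the certificate with openssl, then create the /etc/ssl/certs/<hash>.0 link
// the way the log's `sudo ln -fs` does. Not minikube's implementation.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(certPath string) error {
	// `openssl x509 -hash -noout` prints the subject-name hash used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link first, mirroring the -f in `ln -fs`.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The ".0" suffix is the collision counter OpenSSL uses for hashed certificate directories when several certificates share the same subject hash.
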
	I0110 10:04:53.984200  510366 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:04:53.988012  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 10:04:54.030216  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 10:04:54.071502  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 10:04:54.112752  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 10:04:54.154467  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 10:04:54.210619  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
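
Each `-checkend 86400` run above asks openssl whether the certificate will expire within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. A minimal equivalent check in Go, for illustration only (the path below is just an example, not part of the test run):

// Illustrative sketch of what `openssl x509 -checkend 86400` verifies:
// the certificate must still be valid 24 hours from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}
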
	I0110 10:04:54.278756  510366 kubeadm.go:401] StartCluster: {Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:04:54.278902  510366 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:04:54.278995  510366 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:04:54.352119  510366 cri.go:96] found id: "b27a68f656d959fe8dd95b31847ae1379016e414f61244c53a75e06cd9529ef1"
	I0110 10:04:54.352190  510366 cri.go:96] found id: "146888a99c32f1421edf0f2758f99439bfb9a9b52b71842262af693d53517c9b"
	I0110 10:04:54.352209  510366 cri.go:96] found id: "95f695558eee3b836eb9c525cc507ba61a2606e94eb5c3f56adb26321cc21e29"
	I0110 10:04:54.352229  510366 cri.go:96] found id: "c58341e383779a703a569adcc9010c3f6caf2719864eabf706b906edf6cb526c"
	I0110 10:04:54.352276  510366 cri.go:96] found id: ""
	I0110 10:04:54.352360  510366 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 10:04:54.364483  510366 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:04:54Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:04:54.364631  510366 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:04:54.378270  510366 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 10:04:54.378347  510366 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 10:04:54.378435  510366 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 10:04:54.386013  510366 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 10:04:54.386506  510366 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-964204" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:04:54.386664  510366 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-308033/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-964204" cluster setting kubeconfig missing "no-preload-964204" context setting]
	I0110 10:04:54.387018  510366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:04:54.388582  510366 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 10:04:54.399029  510366 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 10:04:54.399108  510366 kubeadm.go:602] duration metric: took 20.741385ms to restartPrimaryControlPlane
	I0110 10:04:54.399132  510366 kubeadm.go:403] duration metric: took 120.386073ms to StartCluster
	I0110 10:04:54.399177  510366 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:04:54.399267  510366 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:04:54.399896  510366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
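
The kubeconfig repair recorded above amounts to inserting the missing cluster, user, and context entries and rewriting the file under a lock. A simplified sketch of that idea with client-go's clientcmd package, for illustration only (every path and name below is a placeholder, the lock handling is omitted, and this is not minikube's implementation):

// Illustrative only: add a cluster/user/context entry to a kubeconfig and write it back.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/tmp/example-kubeconfig" // placeholder path
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig() // start from an empty config if the file is missing
	}

	cluster := api.NewCluster()
	cluster.Server = "https://192.168.76.2:8443"
	cluster.CertificateAuthority = "/tmp/example-ca.crt" // placeholder

	user := api.NewAuthInfo()
	user.ClientCertificate = "/tmp/example-client.crt" // placeholder
	user.ClientKey = "/tmp/example-client.key"         // placeholder

	ctx := api.NewContext()
	ctx.Cluster = "no-preload-964204"
	ctx.AuthInfo = "no-preload-964204"

	cfg.Clusters["no-preload-964204"] = cluster
	cfg.AuthInfos["no-preload-964204"] = user
	cfg.Contexts["no-preload-964204"] = ctx
	cfg.CurrentContext = "no-preload-964204"

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
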
	I0110 10:04:54.400152  510366 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:04:54.400565  510366 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:04:54.400656  510366 addons.go:70] Setting storage-provisioner=true in profile "no-preload-964204"
	I0110 10:04:54.400679  510366 addons.go:239] Setting addon storage-provisioner=true in "no-preload-964204"
	W0110 10:04:54.400689  510366 addons.go:248] addon storage-provisioner should already be in state true
	I0110 10:04:54.400713  510366 addons.go:70] Setting dashboard=true in profile "no-preload-964204"
	I0110 10:04:54.400793  510366 addons.go:239] Setting addon dashboard=true in "no-preload-964204"
	W0110 10:04:54.400818  510366 addons.go:248] addon dashboard should already be in state true
	I0110 10:04:54.400972  510366 host.go:66] Checking if "no-preload-964204" exists ...
	I0110 10:04:54.401726  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:54.400716  510366 host.go:66] Checking if "no-preload-964204" exists ...
	I0110 10:04:54.400723  510366 addons.go:70] Setting default-storageclass=true in profile "no-preload-964204"
	I0110 10:04:54.402286  510366 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-964204"
	I0110 10:04:54.402502  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:54.402558  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:54.405249  510366 out.go:179] * Verifying Kubernetes components...
	I0110 10:04:54.400624  510366 config.go:182] Loaded profile config "no-preload-964204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:04:54.408708  510366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:04:54.457557  510366 addons.go:239] Setting addon default-storageclass=true in "no-preload-964204"
	W0110 10:04:54.457579  510366 addons.go:248] addon default-storageclass should already be in state true
	I0110 10:04:54.457603  510366 host.go:66] Checking if "no-preload-964204" exists ...
	I0110 10:04:54.458007  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:54.458204  510366 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 10:04:54.467206  510366 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 10:04:54.470142  510366 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:04:54.470273  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 10:04:54.470286  510366 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 10:04:54.470367  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:54.476296  510366 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:04:54.476321  510366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:04:54.476393  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:54.500540  510366 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:04:54.500562  510366 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:04:54.500626  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:54.532650  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:54.533138  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:54.550258  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:54.805159  510366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:04:54.831693  510366 node_ready.go:35] waiting up to 6m0s for node "no-preload-964204" to be "Ready" ...
	I0110 10:04:54.844792  510366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:04:54.873649  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 10:04:54.873669  510366 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 10:04:54.879986  510366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:04:54.928845  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 10:04:54.928870  510366 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 10:04:55.007307  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 10:04:55.007337  510366 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 10:04:55.058493  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 10:04:55.058518  510366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 10:04:55.071557  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 10:04:55.071581  510366 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 10:04:55.089300  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 10:04:55.089332  510366 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 10:04:55.106779  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 10:04:55.106801  510366 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 10:04:55.130887  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 10:04:55.130913  510366 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 10:04:55.153782  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:04:55.153804  510366 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 10:04:55.170860  510366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:04:57.497642  510366 node_ready.go:49] node "no-preload-964204" is "Ready"
	I0110 10:04:57.497675  510366 node_ready.go:38] duration metric: took 2.665949206s for node "no-preload-964204" to be "Ready" ...
	I0110 10:04:57.497689  510366 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:04:57.497750  510366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:04:57.764716  510366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.919883832s)
	I0110 10:04:59.234218  510366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.354195746s)
	I0110 10:04:59.234333  510366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.063443393s)
	I0110 10:04:59.234526  510366 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.736758914s)
	I0110 10:04:59.234545  510366 api_server.go:72] duration metric: took 4.834338458s to wait for apiserver process to appear ...
	I0110 10:04:59.234552  510366 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:04:59.234583  510366 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:04:59.237524  510366 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-964204 addons enable metrics-server
	
	I0110 10:04:59.240630  510366 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I0110 10:04:59.243940  510366 addons.go:530] duration metric: took 4.843380822s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I0110 10:04:59.245232  510366 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:04:59.246407  510366 api_server.go:141] control plane version: v1.35.0
	I0110 10:04:59.246441  510366 api_server.go:131] duration metric: took 11.881924ms to wait for apiserver health ...
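
The healthz wait above is a plain HTTPS poll of the apiserver's /healthz endpoint until it answers 200/"ok". A rough, self-contained sketch of such a loop (not minikube's implementation; the endpoint, timeout, and the choice to skip TLS verification are assumptions made for brevity):

// Illustrative sketch: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// A real client would trust the cluster CA; this sketch skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
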
	I0110 10:04:59.246450  510366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:04:59.250658  510366 system_pods.go:59] 8 kube-system pods found
	I0110 10:04:59.250715  510366 system_pods.go:61] "coredns-7d764666f9-nbrjs" [26b2eccf-72f4-4fee-bd27-95ab393ab006] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:04:59.250727  510366 system_pods.go:61] "etcd-no-preload-964204" [0466a1f7-5a61-4516-a394-9e671cb0fd86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:04:59.250736  510366 system_pods.go:61] "kindnet-fmp9h" [e91c85ce-4c93-4059-99c2-94f99d1adf02] Running
	I0110 10:04:59.250744  510366 system_pods.go:61] "kube-apiserver-no-preload-964204" [3c3ed06f-a02a-41f6-b884-61f575c33979] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:04:59.250751  510366 system_pods.go:61] "kube-controller-manager-no-preload-964204" [c3816078-65c5-491c-9198-9d54c097e217] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:04:59.250756  510366 system_pods.go:61] "kube-proxy-7f6q4" [02ce65ed-8383-4cd3-aae8-a5292c0b3ab1] Running
	I0110 10:04:59.250763  510366 system_pods.go:61] "kube-scheduler-no-preload-964204" [a5e9ae4f-a95a-4e42-805d-cc803cbeb877] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:04:59.250768  510366 system_pods.go:61] "storage-provisioner" [0a72c05f-1ea6-4b65-a567-cdea38d0054d] Running
	I0110 10:04:59.250775  510366 system_pods.go:74] duration metric: took 4.318685ms to wait for pod list to return data ...
	I0110 10:04:59.250783  510366 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:04:59.253556  510366 default_sa.go:45] found service account: "default"
	I0110 10:04:59.253578  510366 default_sa.go:55] duration metric: took 2.789607ms for default service account to be created ...
	I0110 10:04:59.253588  510366 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:04:59.257027  510366 system_pods.go:86] 8 kube-system pods found
	I0110 10:04:59.257107  510366 system_pods.go:89] "coredns-7d764666f9-nbrjs" [26b2eccf-72f4-4fee-bd27-95ab393ab006] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:04:59.257138  510366 system_pods.go:89] "etcd-no-preload-964204" [0466a1f7-5a61-4516-a394-9e671cb0fd86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:04:59.257174  510366 system_pods.go:89] "kindnet-fmp9h" [e91c85ce-4c93-4059-99c2-94f99d1adf02] Running
	I0110 10:04:59.257197  510366 system_pods.go:89] "kube-apiserver-no-preload-964204" [3c3ed06f-a02a-41f6-b884-61f575c33979] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:04:59.257221  510366 system_pods.go:89] "kube-controller-manager-no-preload-964204" [c3816078-65c5-491c-9198-9d54c097e217] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:04:59.257245  510366 system_pods.go:89] "kube-proxy-7f6q4" [02ce65ed-8383-4cd3-aae8-a5292c0b3ab1] Running
	I0110 10:04:59.257278  510366 system_pods.go:89] "kube-scheduler-no-preload-964204" [a5e9ae4f-a95a-4e42-805d-cc803cbeb877] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:04:59.257302  510366 system_pods.go:89] "storage-provisioner" [0a72c05f-1ea6-4b65-a567-cdea38d0054d] Running
	I0110 10:04:59.257327  510366 system_pods.go:126] duration metric: took 3.732561ms to wait for k8s-apps to be running ...
	I0110 10:04:59.257356  510366 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:04:59.257426  510366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:04:59.271328  510366 system_svc.go:56] duration metric: took 13.964593ms WaitForService to wait for kubelet
	I0110 10:04:59.271412  510366 kubeadm.go:587] duration metric: took 4.871186002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:04:59.271448  510366 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:04:59.274728  510366 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:04:59.274759  510366 node_conditions.go:123] node cpu capacity is 2
	I0110 10:04:59.274773  510366 node_conditions.go:105] duration metric: took 3.302302ms to run NodePressure ...
	I0110 10:04:59.274786  510366 start.go:242] waiting for startup goroutines ...
	I0110 10:04:59.274794  510366 start.go:247] waiting for cluster config update ...
	I0110 10:04:59.274805  510366 start.go:256] writing updated cluster config ...
	I0110 10:04:59.275121  510366 ssh_runner.go:195] Run: rm -f paused
	I0110 10:04:59.279437  510366 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:04:59.282918  510366 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nbrjs" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 10:05:01.288998  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:03.788560  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:06.289928  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:08.788731  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:11.289744  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:13.788490  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:15.788721  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:18.289330  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:20.293229  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:22.789152  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:25.289050  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:27.788313  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:30.288701  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:32.788201  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	I0110 10:05:34.287707  510366 pod_ready.go:94] pod "coredns-7d764666f9-nbrjs" is "Ready"
	I0110 10:05:34.287739  510366 pod_ready.go:86] duration metric: took 35.004796798s for pod "coredns-7d764666f9-nbrjs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.290345  510366 pod_ready.go:83] waiting for pod "etcd-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.294279  510366 pod_ready.go:94] pod "etcd-no-preload-964204" is "Ready"
	I0110 10:05:34.294308  510366 pod_ready.go:86] duration metric: took 3.936058ms for pod "etcd-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.296592  510366 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.300913  510366 pod_ready.go:94] pod "kube-apiserver-no-preload-964204" is "Ready"
	I0110 10:05:34.300939  510366 pod_ready.go:86] duration metric: took 4.323616ms for pod "kube-apiserver-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.303216  510366 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.486370  510366 pod_ready.go:94] pod "kube-controller-manager-no-preload-964204" is "Ready"
	I0110 10:05:34.486397  510366 pod_ready.go:86] duration metric: took 183.154945ms for pod "kube-controller-manager-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.686518  510366 pod_ready.go:83] waiting for pod "kube-proxy-7f6q4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:35.087004  510366 pod_ready.go:94] pod "kube-proxy-7f6q4" is "Ready"
	I0110 10:05:35.087093  510366 pod_ready.go:86] duration metric: took 400.548456ms for pod "kube-proxy-7f6q4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:35.286387  510366 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:35.686453  510366 pod_ready.go:94] pod "kube-scheduler-no-preload-964204" is "Ready"
	I0110 10:05:35.686480  510366 pod_ready.go:86] duration metric: took 400.065554ms for pod "kube-scheduler-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:35.686494  510366 pod_ready.go:40] duration metric: took 36.40702222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
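
The "extra waiting" phase above lists kube-system pods by their component labels and blocks until each reports the Ready condition. A minimal client-go sketch of that idea, for illustration only (the kubeconfig path, label list, and timeout are placeholders, and this is not minikube's pod_ready implementation):

// Illustrative sketch: wait until pods matching each label selector report Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func allReady(pods []corev1.Pod) bool {
	for i := range pods {
		ready := false
		for _, c := range pods[i].Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := []string{"component=etcd", "component=kube-apiserver", "k8s-app=kube-dns"} // placeholder subset
	deadline := time.Now().Add(4 * time.Minute)
	for _, sel := range labels {
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
				fmt.Printf("pods matching %q are Ready\n", sel)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}
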
	I0110 10:05:35.739678  510366 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:05:35.743307  510366 out.go:203] 
	W0110 10:05:35.746687  510366 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:05:35.749956  510366 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:05:35.753278  510366 out.go:179] * Done! kubectl is now configured to use "no-preload-964204" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 10:05:23 no-preload-964204 conmon[1661]: conmon 2d63e97a45900511ce73 <ninfo>: container 1663 exited with status 1
	Jan 10 10:05:23 no-preload-964204 crio[661]: time="2026-01-10T10:05:23.853520956Z" level=info msg="Removing container: f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52" id=8f90c038-9d29-45d0-9920-43ad33ed8182 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:05:23 no-preload-964204 crio[661]: time="2026-01-10T10:05:23.862583456Z" level=info msg="Error loading conmon cgroup of container f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52: cgroup deleted" id=8f90c038-9d29-45d0-9920-43ad33ed8182 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:05:23 no-preload-964204 crio[661]: time="2026-01-10T10:05:23.866087837Z" level=info msg="Removed container f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l/dashboard-metrics-scraper" id=8f90c038-9d29-45d0-9920-43ad33ed8182 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:05:28 no-preload-964204 conmon[1161]: conmon e5367860812c0fd9dbc4 <ninfo>: container 1170 exited with status 1
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.866991953Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7dc13d82-c9d3-48fe-92d2-71e0f754c775 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.868341057Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a8bf03da-9df8-46b6-9e88-aa4c2e4af4ff name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.869391557Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=dc544324-1150-4379-8241-fd5472e14fb9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.869499759Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.874418714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.87459388Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/513677cb66801c2627fc9495c678f0aa9416c2dc134933d155dc41312bbd526f/merged/etc/passwd: no such file or directory"
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.874614623Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/513677cb66801c2627fc9495c678f0aa9416c2dc134933d155dc41312bbd526f/merged/etc/group: no such file or directory"
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.874869428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.897910116Z" level=info msg="Created container cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f: kube-system/storage-provisioner/storage-provisioner" id=dc544324-1150-4379-8241-fd5472e14fb9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.901088044Z" level=info msg="Starting container: cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f" id=47fcc7e7-7dbf-44ce-b194-b7c7d8f1eae0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.908910527Z" level=info msg="Started container" PID=1675 containerID=cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f description=kube-system/storage-provisioner/storage-provisioner id=47fcc7e7-7dbf-44ce-b194-b7c7d8f1eae0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14b557a1126a5d54db45ef11d021726baa38ea7dcfbc3b82d323c99e3c1f91bc
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.651856712Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.651892831Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.656359571Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.656394575Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.66078302Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.660816572Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.660838357Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.664850379Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.664890864Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	cd918a01d2e1b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   14b557a1126a5       storage-provisioner                          kube-system
	2d63e97a45900       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   911cb839e2796       dashboard-metrics-scraper-867fb5f87b-f4z7l   kubernetes-dashboard
	5bba4f2fe6e9a       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago      Running             kubernetes-dashboard        0                   3e1ad88d4212f       kubernetes-dashboard-b84665fb8-6m4km         kubernetes-dashboard
	abc42cffac590       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           51 seconds ago      Running             coredns                     1                   83cf949b77619       coredns-7d764666f9-nbrjs                     kube-system
	c7dd71b3888f7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   30a621d83c17c       busybox                                      default
	6d4368ac3242c       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           51 seconds ago      Running             kindnet-cni                 1                   8d238b81170f1       kindnet-fmp9h                                kube-system
	fe92ff1c402a7       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           51 seconds ago      Running             kube-proxy                  1                   812c93dcafcf1       kube-proxy-7f6q4                             kube-system
	e5367860812c0       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago      Exited              storage-provisioner         1                   14b557a1126a5       storage-provisioner                          kube-system
	b27a68f656d95       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           55 seconds ago      Running             kube-scheduler              1                   0709fe6d06ad3       kube-scheduler-no-preload-964204             kube-system
	146888a99c32f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           55 seconds ago      Running             kube-apiserver              1                   ed6f856e76c1f       kube-apiserver-no-preload-964204             kube-system
	95f695558eee3       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           55 seconds ago      Running             etcd                        1                   33838eeeb5c20       etcd-no-preload-964204                       kube-system
	c58341e383779       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           55 seconds ago      Running             kube-controller-manager     1                   377f5dde7f990       kube-controller-manager-no-preload-964204    kube-system
	
	
	==> coredns [abc42cffac590ed77549483d1a05755448e312a585d4204a268fc5e5f6a03e0a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51141 - 12533 "HINFO IN 5204147763035130131.4557860035770675533. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043648402s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-964204
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-964204
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=no-preload-964204
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:03:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-964204
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:05:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:05:28 +0000   Sat, 10 Jan 2026 10:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:05:28 +0000   Sat, 10 Jan 2026 10:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:05:28 +0000   Sat, 10 Jan 2026 10:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:05:28 +0000   Sat, 10 Jan 2026 10:04:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-964204
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                03ea2076-6e07-410a-8003-5ef363ddb41d
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-nbrjs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-no-preload-964204                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         110s
	  kube-system                 kindnet-fmp9h                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-964204              250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-964204     200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-7f6q4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-964204              100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-f4z7l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-6m4km          0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node no-preload-964204 event: Registered Node no-preload-964204 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node no-preload-964204 event: Registered Node no-preload-964204 in Controller
	
	
	==> dmesg <==
	[Jan10 09:31] overlayfs: idmapped layers are currently not supported
	[Jan10 09:35] overlayfs: idmapped layers are currently not supported
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [95f695558eee3b836eb9c525cc507ba61a2606e94eb5c3f56adb26321cc21e29] <==
	{"level":"info","ts":"2026-01-10T10:04:54.614983Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:04:54.615032Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:04:54.615245Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:04:54.615256Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:04:54.616057Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T10:04:54.616109Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:04:54.616180Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T10:04:54.656974Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T10:04:54.657058Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:04:54.657118Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:04:54.657132Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:04:54.657147Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T10:04:54.658163Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:04:54.658187Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:04:54.658232Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T10:04:54.658244Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:04:54.661723Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-964204 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:04:54.661767Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:04:54.662691Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:04:54.682520Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T10:04:54.685801Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:04:54.686866Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:04:54.691790Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T10:04:54.692917Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:04:54.692985Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:05:50 up  2:48,  0 user,  load average: 1.29, 1.47, 1.86
	Linux no-preload-964204 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6d4368ac3242cdcba3ee7fb78eb2026a6111050fe391df24e78edf0b58cf778f] <==
	I0110 10:04:58.452395       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:04:58.473071       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:04:58.473240       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:04:58.473253       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:04:58.473268       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:04:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:04:58.647328       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:04:58.647398       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:04:58.647432       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:04:58.649339       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 10:05:28.647685       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0110 10:05:28.649780       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 10:05:28.649856       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 10:05:28.649877       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0110 10:05:29.949547       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:05:29.949585       1 metrics.go:72] Registering metrics
	I0110 10:05:29.949636       1 controller.go:711] "Syncing nftables rules"
	I0110 10:05:38.646935       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:05:38.646975       1 main.go:301] handling current node
	I0110 10:05:48.656564       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:05:48.656667       1 main.go:301] handling current node
	
	
	==> kube-apiserver [146888a99c32f1421edf0f2758f99439bfb9a9b52b71842262af693d53517c9b] <==
	I0110 10:04:57.604678       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 10:04:57.604705       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 10:04:57.605271       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 10:04:57.605975       1 aggregator.go:187] initial CRD sync complete...
	I0110 10:04:57.605990       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 10:04:57.605996       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 10:04:57.606001       1 cache.go:39] Caches are synced for autoregister controller
	I0110 10:04:57.606160       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:57.606187       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 10:04:57.632660       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 10:04:57.634143       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 10:04:57.660398       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 10:04:57.672995       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:04:57.674302       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:04:57.852784       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:04:58.339968       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:04:58.814962       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:04:58.934369       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:04:58.974750       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:04:58.989781       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:04:59.113179       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.185.78"}
	I0110 10:04:59.133429       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.196.59"}
	I0110 10:05:01.069158       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:05:01.169135       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:05:01.268660       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c58341e383779a703a569adcc9010c3f6caf2719864eabf706b906edf6cb526c] <==
	I0110 10:05:00.677124       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-964204"
	I0110 10:05:00.677211       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 10:05:00.685000       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.686153       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.686796       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.687582       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.687691       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.687843       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.687937       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.688288       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.688406       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.688465       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.688584       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.688952       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.689089       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.689485       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.689635       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.691090       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.692644       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.724247       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:05:00.750617       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.783649       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.783754       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:05:00.783788       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 10:05:00.824775       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [fe92ff1c402a775ab835548cd8b9b6ed7a60eea52715b689a2348e008a515c33] <==
	I0110 10:04:58.709796       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:04:58.921909       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:04:59.036870       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:59.036913       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 10:04:59.036981       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:04:59.082735       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:04:59.082895       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:04:59.088323       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:04:59.088912       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:04:59.089148       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:04:59.090407       1 config.go:200] "Starting service config controller"
	I0110 10:04:59.090484       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:04:59.090538       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:04:59.090586       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:04:59.090643       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:04:59.090677       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:04:59.091498       1 config.go:309] "Starting node config controller"
	I0110 10:04:59.091556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:04:59.091586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:04:59.190621       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:04:59.190691       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 10:04:59.190940       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b27a68f656d959fe8dd95b31847ae1379016e414f61244c53a75e06cd9529ef1] <==
	I0110 10:04:55.779116       1 serving.go:386] Generated self-signed cert in-memory
	W0110 10:04:57.490851       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 10:04:57.490879       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 10:04:57.490888       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 10:04:57.490896       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 10:04:57.592082       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 10:04:57.592126       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:04:57.597066       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 10:04:57.597102       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:04:57.597284       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 10:04:57.597364       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 10:04:57.698087       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:05:11 no-preload-964204 kubelet[782]: E0110 10:05:11.821052     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:11 no-preload-964204 kubelet[782]: I0110 10:05:11.821072     782 scope.go:122] "RemoveContainer" containerID="f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52"
	Jan 10 10:05:11 no-preload-964204 kubelet[782]: E0110 10:05:11.821222     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-f4z7l_kubernetes-dashboard(8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" podUID="8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5"
	Jan 10 10:05:12 no-preload-964204 kubelet[782]: E0110 10:05:12.825222     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:12 no-preload-964204 kubelet[782]: I0110 10:05:12.825724     782 scope.go:122] "RemoveContainer" containerID="f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52"
	Jan 10 10:05:12 no-preload-964204 kubelet[782]: E0110 10:05:12.825965     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-f4z7l_kubernetes-dashboard(8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" podUID="8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5"
	Jan 10 10:05:13 no-preload-964204 kubelet[782]: E0110 10:05:13.827694     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:13 no-preload-964204 kubelet[782]: I0110 10:05:13.827740     782 scope.go:122] "RemoveContainer" containerID="f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52"
	Jan 10 10:05:13 no-preload-964204 kubelet[782]: E0110 10:05:13.827908     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-f4z7l_kubernetes-dashboard(8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" podUID="8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5"
	Jan 10 10:05:13 no-preload-964204 kubelet[782]: E0110 10:05:13.943908     782 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-964204" containerName="kube-apiserver"
	Jan 10 10:05:14 no-preload-964204 kubelet[782]: E0110 10:05:14.830218     782 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-964204" containerName="kube-apiserver"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: E0110 10:05:23.722062     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: I0110 10:05:23.722100     782 scope.go:122] "RemoveContainer" containerID="f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: I0110 10:05:23.851856     782 scope.go:122] "RemoveContainer" containerID="f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: E0110 10:05:23.852152     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: I0110 10:05:23.852182     782 scope.go:122] "RemoveContainer" containerID="2d63e97a45900511ce7398bafb57d28bb25cc046f89d1326f20113a76e6d08df"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: E0110 10:05:23.852342     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-f4z7l_kubernetes-dashboard(8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" podUID="8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5"
	Jan 10 10:05:28 no-preload-964204 kubelet[782]: I0110 10:05:28.866365     782 scope.go:122] "RemoveContainer" containerID="e5367860812c0fd9dbc45503fe4cb48fee1dbd289d6727499208c2235c12dfda"
	Jan 10 10:05:32 no-preload-964204 kubelet[782]: E0110 10:05:32.471874     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:32 no-preload-964204 kubelet[782]: I0110 10:05:32.471926     782 scope.go:122] "RemoveContainer" containerID="2d63e97a45900511ce7398bafb57d28bb25cc046f89d1326f20113a76e6d08df"
	Jan 10 10:05:32 no-preload-964204 kubelet[782]: E0110 10:05:32.472087     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-f4z7l_kubernetes-dashboard(8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" podUID="8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5"
	Jan 10 10:05:34 no-preload-964204 kubelet[782]: E0110 10:05:34.077079     782 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nbrjs" containerName="coredns"
	Jan 10 10:05:47 no-preload-964204 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 10:05:48 no-preload-964204 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 10:05:48 no-preload-964204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5bba4f2fe6e9a36f29cd4910fddfe6aff4d4e5cb154e4bdf8fb68ba0e7ea0c95] <==
	2026/01/10 10:05:06 Starting overwatch
	2026/01/10 10:05:06 Using namespace: kubernetes-dashboard
	2026/01/10 10:05:06 Using in-cluster config to connect to apiserver
	2026/01/10 10:05:06 Using secret token for csrf signing
	2026/01/10 10:05:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 10:05:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 10:05:06 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 10:05:06 Generating JWE encryption key
	2026/01/10 10:05:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 10:05:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 10:05:06 Initializing JWE encryption key from synchronized object
	2026/01/10 10:05:06 Creating in-cluster Sidecar client
	2026/01/10 10:05:06 Serving insecurely on HTTP port: 9090
	2026/01/10 10:05:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:05:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f] <==
	I0110 10:05:28.923638       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:05:28.939580       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:05:28.939875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 10:05:28.942085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:32.397641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:36.657848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:40.256043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:43.309832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:46.332479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:46.338306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:05:46.338518       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:05:46.338709       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-964204_1e5d4e18-f450-45ed-91c1-eb0acc5f47da!
	I0110 10:05:46.339214       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d8770a8-2c32-4636-b869-a554550e1ab6", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-964204_1e5d4e18-f450-45ed-91c1-eb0acc5f47da became leader
	W0110 10:05:46.342575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:46.350215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:05:46.439727       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-964204_1e5d4e18-f450-45ed-91c1-eb0acc5f47da!
	W0110 10:05:48.353238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:48.359614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:50.363159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:50.369892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e5367860812c0fd9dbc45503fe4cb48fee1dbd289d6727499208c2235c12dfda] <==
	I0110 10:04:58.290456       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 10:05:28.291987       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-964204 -n no-preload-964204
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-964204 -n no-preload-964204: exit status 2 (369.436861ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-964204 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-964204
helpers_test.go:244: (dbg) docker inspect no-preload-964204:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98",
	        "Created": "2026-01-10T10:03:28.469288354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510494,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:04:46.800937069Z",
	            "FinishedAt": "2026-01-10T10:04:45.9771762Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/hosts",
	        "LogPath": "/var/lib/docker/containers/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98/d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98-json.log",
	        "Name": "/no-preload-964204",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-964204:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-964204",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5228a313f586d9ea24f3d7551652c6f5101f3655daa318f31f8bd18c9dbbc98",
	                "LowerDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb91a76218e89afe839cf42d578cf786102a94ce218fad5f4d5bfbb914e92fe5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-964204",
	                "Source": "/var/lib/docker/volumes/no-preload-964204/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-964204",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-964204",
	                "name.minikube.sigs.k8s.io": "no-preload-964204",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ae199936b74e9127a19e2e837ec76ff42a5b99cb4e2005b3a0bee7c7e83a28ef",
	            "SandboxKey": "/var/run/docker/netns/ae199936b74e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-964204": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:c2:3b:c1:95:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "23c88132d52b29689462f98c2dbfa4655b3eded5f2a83bfc6642616f52ac86e6",
	                    "EndpointID": "483fa5afc1c062a9d31d519e53081d02674babe65375cfeae9e7c584b882b4cf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-964204",
	                        "d5228a313f58"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-964204 -n no-preload-964204
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-964204 -n no-preload-964204: exit status 2 (349.843248ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-964204 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-964204 logs -n 25: (1.289382889s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:54 UTC │ 10 Jan 26 09:54 UTC │
	│ start   │ -p cert-expiration-599529 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:57 UTC │ 10 Jan 26 09:58 UTC │
	│ delete  │ -p cert-expiration-599529                                                                                                                                                                                                                     │ cert-expiration-599529    │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │ 10 Jan 26 09:58 UTC │
	│ start   │ -p force-systemd-flag-524845 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-524845 │ jenkins │ v1.37.0 │ 10 Jan 26 09:58 UTC │                     │
	│ delete  │ -p force-systemd-env-646877                                                                                                                                                                                                                   │ force-systemd-env-646877  │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p cert-options-525619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ cert-options-525619 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ ssh     │ -p cert-options-525619 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ delete  │ -p cert-options-525619                                                                                                                                                                                                                        │ cert-options-525619       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │                     │
	│ stop    │ -p old-k8s-version-729486 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │ 10 Jan 26 10:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-729486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:02 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:03 UTC │
	│ image   │ old-k8s-version-729486 image list --format=json                                                                                                                                                                                               │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ pause   │ -p old-k8s-version-729486 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │                     │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486    │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │                     │
	│ stop    │ -p no-preload-964204 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable dashboard -p no-preload-964204 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:05 UTC │
	│ image   │ no-preload-964204 image list --format=json                                                                                                                                                                                                    │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ pause   │ -p no-preload-964204 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-964204         │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:04:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:04:46.527587  510366 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:04:46.527710  510366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:04:46.527721  510366 out.go:374] Setting ErrFile to fd 2...
	I0110 10:04:46.527728  510366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:04:46.528007  510366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:04:46.528382  510366 out.go:368] Setting JSON to false
	I0110 10:04:46.529253  510366 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10036,"bootTime":1768029451,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:04:46.529325  510366 start.go:143] virtualization:  
	I0110 10:04:46.534544  510366 out.go:179] * [no-preload-964204] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:04:46.537652  510366 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:04:46.537671  510366 notify.go:221] Checking for updates...
	I0110 10:04:46.543404  510366 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:04:46.546202  510366 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:04:46.549148  510366 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:04:46.551971  510366 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:04:46.554727  510366 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:04:46.558167  510366 config.go:182] Loaded profile config "no-preload-964204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:04:46.558790  510366 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:04:46.593890  510366 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:04:46.594006  510366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:04:46.651080  510366 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:04:46.641909977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:04:46.651182  510366 docker.go:319] overlay module found
	I0110 10:04:46.654560  510366 out.go:179] * Using the docker driver based on existing profile
	I0110 10:04:46.657574  510366 start.go:309] selected driver: docker
	I0110 10:04:46.657597  510366 start.go:928] validating driver "docker" against &{Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:04:46.657700  510366 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:04:46.658468  510366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:04:46.713175  510366 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:04:46.704568936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:04:46.713501  510366 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:04:46.713539  510366 cni.go:84] Creating CNI manager for ""
	I0110 10:04:46.713599  510366 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:04:46.713644  510366 start.go:353] cluster config:
	{Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:04:46.718669  510366 out.go:179] * Starting "no-preload-964204" primary control-plane node in "no-preload-964204" cluster
	I0110 10:04:46.721526  510366 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:04:46.724371  510366 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:04:46.727112  510366 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:04:46.727180  510366 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:04:46.727248  510366 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/config.json ...
	I0110 10:04:46.727524  510366 cache.go:107] acquiring lock: {Name:mkaf98767e2a7d58e08cc2ca469eac45d26ab17d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727604  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0110 10:04:46.727612  510366 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 99.144µs
	I0110 10:04:46.727621  510366 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0110 10:04:46.727632  510366 cache.go:107] acquiring lock: {Name:mk20f45a028e063162f8cd4bcc9049083b517dce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727661  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0110 10:04:46.727666  510366 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 35.603µs
	I0110 10:04:46.727672  510366 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0110 10:04:46.727681  510366 cache.go:107] acquiring lock: {Name:mk49f61dae811454fbbf5c86caa9b028b9c6fc70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727707  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0110 10:04:46.727713  510366 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 32.812µs
	I0110 10:04:46.727718  510366 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0110 10:04:46.727739  510366 cache.go:107] acquiring lock: {Name:mk1d8ad3a0da43b5820d3ac9775158ff65f73409 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727767  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0110 10:04:46.727771  510366 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 34.471µs
	I0110 10:04:46.727777  510366 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0110 10:04:46.727785  510366 cache.go:107] acquiring lock: {Name:mk27d75a0d283ab8c320b03d40025ce2f8416bac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727810  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I0110 10:04:46.727792  510366 cache.go:107] acquiring lock: {Name:mke106dc55e7252772391fff3ed3fce4c597722f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727831  510366 cache.go:107] acquiring lock: {Name:mk5e0c44af9753c2eb4284091ed19ea2384d8759 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727861  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0110 10:04:46.727866  510366 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 36.432µs
	I0110 10:04:46.727874  510366 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0110 10:04:46.727870  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0110 10:04:46.727883  510366 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 98.996µs
	I0110 10:04:46.727888  510366 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0110 10:04:46.727821  510366 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 30.137µs
	I0110 10:04:46.727895  510366 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0110 10:04:46.727887  510366 cache.go:107] acquiring lock: {Name:mk025301e6f5fb7d9efce7266c9392491c803686 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.727924  510366 cache.go:115] /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0110 10:04:46.727931  510366 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 51.947µs
	I0110 10:04:46.727939  510366 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0110 10:04:46.727946  510366 cache.go:87] Successfully saved all images to host disk.
	I0110 10:04:46.746589  510366 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:04:46.746610  510366 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:04:46.746632  510366 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:04:46.746664  510366 start.go:360] acquireMachinesLock for no-preload-964204: {Name:mk30268180d89419a4155580e5db2de74dfb3aca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:04:46.746731  510366 start.go:364] duration metric: took 45.293µs to acquireMachinesLock for "no-preload-964204"
	I0110 10:04:46.746754  510366 start.go:96] Skipping create...Using existing machine configuration
	I0110 10:04:46.746762  510366 fix.go:54] fixHost starting: 
	I0110 10:04:46.747033  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:46.763956  510366 fix.go:112] recreateIfNeeded on no-preload-964204: state=Stopped err=<nil>
	W0110 10:04:46.763996  510366 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 10:04:46.769157  510366 out.go:252] * Restarting existing docker container for "no-preload-964204" ...
	I0110 10:04:46.769281  510366 cli_runner.go:164] Run: docker start no-preload-964204
	I0110 10:04:47.027120  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:47.049362  510366 kic.go:430] container "no-preload-964204" state is running.
	I0110 10:04:47.049755  510366 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-964204
	I0110 10:04:47.071531  510366 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/config.json ...
	I0110 10:04:47.071762  510366 machine.go:94] provisionDockerMachine start ...
	I0110 10:04:47.071865  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:47.096406  510366 main.go:144] libmachine: Using SSH client type: native
	I0110 10:04:47.096778  510366 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0110 10:04:47.096788  510366 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:04:47.097451  510366 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 10:04:50.260274  510366 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-964204
	
	I0110 10:04:50.260301  510366 ubuntu.go:182] provisioning hostname "no-preload-964204"
	I0110 10:04:50.260427  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:50.277769  510366 main.go:144] libmachine: Using SSH client type: native
	I0110 10:04:50.278087  510366 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0110 10:04:50.278106  510366 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-964204 && echo "no-preload-964204" | sudo tee /etc/hostname
	I0110 10:04:50.434404  510366 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-964204
	
	I0110 10:04:50.434482  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:50.453437  510366 main.go:144] libmachine: Using SSH client type: native
	I0110 10:04:50.453754  510366 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0110 10:04:50.453776  510366 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-964204' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-964204/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-964204' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:04:50.600785  510366 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:04:50.600852  510366 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:04:50.600889  510366 ubuntu.go:190] setting up certificates
	I0110 10:04:50.600919  510366 provision.go:84] configureAuth start
	I0110 10:04:50.601017  510366 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-964204
	I0110 10:04:50.618672  510366 provision.go:143] copyHostCerts
	I0110 10:04:50.618738  510366 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:04:50.618755  510366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:04:50.618830  510366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:04:50.618936  510366 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:04:50.618942  510366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:04:50.618970  510366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:04:50.619032  510366 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:04:50.619037  510366 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:04:50.619061  510366 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:04:50.619114  510366 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.no-preload-964204 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-964204]
	I0110 10:04:50.959539  510366 provision.go:177] copyRemoteCerts
	I0110 10:04:50.959659  510366 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:04:50.959720  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:50.976344  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:51.080638  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 10:04:51.100247  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 10:04:51.119071  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:04:51.138156  510366 provision.go:87] duration metric: took 537.197129ms to configureAuth
	I0110 10:04:51.138186  510366 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:04:51.138389  510366 config.go:182] Loaded profile config "no-preload-964204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:04:51.138498  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:51.156041  510366 main.go:144] libmachine: Using SSH client type: native
	I0110 10:04:51.156360  510366 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33434 <nil> <nil>}
	I0110 10:04:51.156381  510366 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:04:51.501324  510366 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:04:51.501346  510366 machine.go:97] duration metric: took 4.429567254s to provisionDockerMachine
	I0110 10:04:51.501359  510366 start.go:293] postStartSetup for "no-preload-964204" (driver="docker")
	I0110 10:04:51.501370  510366 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:04:51.501434  510366 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:04:51.501495  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:51.523255  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:51.628748  510366 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:04:51.632025  510366 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:04:51.632051  510366 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:04:51.632061  510366 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:04:51.632120  510366 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:04:51.632205  510366 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:04:51.632317  510366 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:04:51.639641  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:04:51.661771  510366 start.go:296] duration metric: took 160.397735ms for postStartSetup
	I0110 10:04:51.661849  510366 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:04:51.661905  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:51.680302  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:51.785455  510366 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:04:51.790049  510366 fix.go:56] duration metric: took 5.043280554s for fixHost
	I0110 10:04:51.790075  510366 start.go:83] releasing machines lock for "no-preload-964204", held for 5.043332361s
	I0110 10:04:51.790145  510366 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-964204
	I0110 10:04:51.811704  510366 ssh_runner.go:195] Run: cat /version.json
	I0110 10:04:51.811759  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:51.812015  510366 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:04:51.812076  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:51.829354  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:51.832763  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:52.034481  510366 ssh_runner.go:195] Run: systemctl --version
	I0110 10:04:52.041399  510366 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:04:52.079286  510366 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:04:52.083946  510366 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:04:52.084020  510366 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:04:52.092348  510366 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 10:04:52.092376  510366 start.go:496] detecting cgroup driver to use...
	I0110 10:04:52.092424  510366 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:04:52.092518  510366 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:04:52.108797  510366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:04:52.122254  510366 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:04:52.122373  510366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:04:52.138433  510366 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:04:52.151992  510366 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:04:52.269409  510366 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:04:52.379454  510366 docker.go:234] disabling docker service ...
	I0110 10:04:52.379562  510366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:04:52.394703  510366 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:04:52.410852  510366 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:04:52.543917  510366 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:04:52.664637  510366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:04:52.677406  510366 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:04:52.692019  510366 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:04:52.692167  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.701070  510366 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:04:52.701153  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.710183  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.719407  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.728620  510366 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:04:52.737045  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.746160  510366 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.754726  510366 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:04:52.764080  510366 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:04:52.771828  510366 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:04:52.779618  510366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:04:52.886124  510366 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:04:53.064101  510366 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:04:53.064181  510366 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:04:53.068675  510366 start.go:574] Will wait 60s for crictl version
	I0110 10:04:53.068752  510366 ssh_runner.go:195] Run: which crictl
	I0110 10:04:53.072675  510366 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:04:53.098381  510366 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:04:53.098467  510366 ssh_runner.go:195] Run: crio --version
	I0110 10:04:53.126520  510366 ssh_runner.go:195] Run: crio --version
	I0110 10:04:53.160912  510366 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:04:53.163975  510366 cli_runner.go:164] Run: docker network inspect no-preload-964204 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:04:53.184353  510366 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:04:53.188475  510366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:04:53.207466  510366 kubeadm.go:884] updating cluster {Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:04:53.207573  510366 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:04:53.207963  510366 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:04:53.256563  510366 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:04:53.256589  510366 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:04:53.256598  510366 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 10:04:53.256704  510366 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-964204 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:04:53.256788  510366 ssh_runner.go:195] Run: crio config
	I0110 10:04:53.309622  510366 cni.go:84] Creating CNI manager for ""
	I0110 10:04:53.309647  510366 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:04:53.309669  510366 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:04:53.309693  510366 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-964204 NodeName:no-preload-964204 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:04:53.309823  510366 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-964204"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:04:53.309899  510366 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:04:53.317846  510366 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:04:53.317912  510366 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:04:53.325806  510366 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 10:04:53.338632  510366 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:04:53.350900  510366 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I0110 10:04:53.364108  510366 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:04:53.367778  510366 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:04:53.377275  510366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:04:53.492989  510366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:04:53.510239  510366 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204 for IP: 192.168.76.2
	I0110 10:04:53.510265  510366 certs.go:195] generating shared ca certs ...
	I0110 10:04:53.510282  510366 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:04:53.510469  510366 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:04:53.510536  510366 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:04:53.510548  510366 certs.go:257] generating profile certs ...
	I0110 10:04:53.510654  510366 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.key
	I0110 10:04:53.510744  510366 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.key.50e5be67
	I0110 10:04:53.510816  510366 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.key
	I0110 10:04:53.510950  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:04:53.510995  510366 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:04:53.511008  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:04:53.511041  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:04:53.511084  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:04:53.511116  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:04:53.511176  510366 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:04:53.511817  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:04:53.536318  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:04:53.557995  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:04:53.578379  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:04:53.604045  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 10:04:53.622153  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:04:53.640078  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:04:53.659424  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:04:53.681106  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:04:53.701870  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:04:53.726025  510366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:04:53.746239  510366 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:04:53.761174  510366 ssh_runner.go:195] Run: openssl version
	I0110 10:04:53.768890  510366 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:04:53.776597  510366 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:04:53.784277  510366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:04:53.788298  510366 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:04:53.788393  510366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:04:53.829628  510366 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:04:53.837254  510366 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:04:53.844967  510366 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:04:53.853981  510366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:04:53.857972  510366 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:04:53.858091  510366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:04:53.899466  510366 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:04:53.910409  510366 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:04:53.919978  510366 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:04:53.927692  510366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:04:53.932092  510366 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:04:53.932193  510366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:04:53.976752  510366 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:04:53.984200  510366 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:04:53.988012  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 10:04:54.030216  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 10:04:54.071502  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 10:04:54.112752  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 10:04:54.154467  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 10:04:54.210619  510366 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 10:04:54.278756  510366 kubeadm.go:401] StartCluster: {Name:no-preload-964204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-964204 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:04:54.278902  510366 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:04:54.278995  510366 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:04:54.352119  510366 cri.go:96] found id: "b27a68f656d959fe8dd95b31847ae1379016e414f61244c53a75e06cd9529ef1"
	I0110 10:04:54.352190  510366 cri.go:96] found id: "146888a99c32f1421edf0f2758f99439bfb9a9b52b71842262af693d53517c9b"
	I0110 10:04:54.352209  510366 cri.go:96] found id: "95f695558eee3b836eb9c525cc507ba61a2606e94eb5c3f56adb26321cc21e29"
	I0110 10:04:54.352229  510366 cri.go:96] found id: "c58341e383779a703a569adcc9010c3f6caf2719864eabf706b906edf6cb526c"
	I0110 10:04:54.352276  510366 cri.go:96] found id: ""
	I0110 10:04:54.352360  510366 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 10:04:54.364483  510366 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:04:54Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:04:54.364631  510366 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:04:54.378270  510366 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 10:04:54.378347  510366 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 10:04:54.378435  510366 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 10:04:54.386013  510366 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 10:04:54.386506  510366 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-964204" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:04:54.386664  510366 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-308033/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-964204" cluster setting kubeconfig missing "no-preload-964204" context setting]
	I0110 10:04:54.387018  510366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:04:54.388582  510366 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 10:04:54.399029  510366 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 10:04:54.399108  510366 kubeadm.go:602] duration metric: took 20.741385ms to restartPrimaryControlPlane
	I0110 10:04:54.399132  510366 kubeadm.go:403] duration metric: took 120.386073ms to StartCluster
	I0110 10:04:54.399177  510366 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:04:54.399267  510366 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:04:54.399896  510366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:04:54.400152  510366 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:04:54.400565  510366 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:04:54.400656  510366 addons.go:70] Setting storage-provisioner=true in profile "no-preload-964204"
	I0110 10:04:54.400679  510366 addons.go:239] Setting addon storage-provisioner=true in "no-preload-964204"
	W0110 10:04:54.400689  510366 addons.go:248] addon storage-provisioner should already be in state true
	I0110 10:04:54.400713  510366 addons.go:70] Setting dashboard=true in profile "no-preload-964204"
	I0110 10:04:54.400793  510366 addons.go:239] Setting addon dashboard=true in "no-preload-964204"
	W0110 10:04:54.400818  510366 addons.go:248] addon dashboard should already be in state true
	I0110 10:04:54.400972  510366 host.go:66] Checking if "no-preload-964204" exists ...
	I0110 10:04:54.401726  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:54.400716  510366 host.go:66] Checking if "no-preload-964204" exists ...
	I0110 10:04:54.400723  510366 addons.go:70] Setting default-storageclass=true in profile "no-preload-964204"
	I0110 10:04:54.402286  510366 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-964204"
	I0110 10:04:54.402502  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:54.402558  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:54.405249  510366 out.go:179] * Verifying Kubernetes components...
	I0110 10:04:54.400624  510366 config.go:182] Loaded profile config "no-preload-964204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:04:54.408708  510366 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:04:54.457557  510366 addons.go:239] Setting addon default-storageclass=true in "no-preload-964204"
	W0110 10:04:54.457579  510366 addons.go:248] addon default-storageclass should already be in state true
	I0110 10:04:54.457603  510366 host.go:66] Checking if "no-preload-964204" exists ...
	I0110 10:04:54.458007  510366 cli_runner.go:164] Run: docker container inspect no-preload-964204 --format={{.State.Status}}
	I0110 10:04:54.458204  510366 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 10:04:54.467206  510366 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 10:04:54.470142  510366 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:04:54.470273  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 10:04:54.470286  510366 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 10:04:54.470367  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:54.476296  510366 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:04:54.476321  510366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:04:54.476393  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:54.500540  510366 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:04:54.500562  510366 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:04:54.500626  510366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-964204
	I0110 10:04:54.532650  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:54.533138  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:54.550258  510366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33434 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/no-preload-964204/id_rsa Username:docker}
	I0110 10:04:54.805159  510366 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:04:54.831693  510366 node_ready.go:35] waiting up to 6m0s for node "no-preload-964204" to be "Ready" ...
	I0110 10:04:54.844792  510366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:04:54.873649  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 10:04:54.873669  510366 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 10:04:54.879986  510366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:04:54.928845  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 10:04:54.928870  510366 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 10:04:55.007307  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 10:04:55.007337  510366 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 10:04:55.058493  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 10:04:55.058518  510366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 10:04:55.071557  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 10:04:55.071581  510366 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 10:04:55.089300  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 10:04:55.089332  510366 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 10:04:55.106779  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 10:04:55.106801  510366 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 10:04:55.130887  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 10:04:55.130913  510366 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 10:04:55.153782  510366 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:04:55.153804  510366 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 10:04:55.170860  510366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:04:57.497642  510366 node_ready.go:49] node "no-preload-964204" is "Ready"
	I0110 10:04:57.497675  510366 node_ready.go:38] duration metric: took 2.665949206s for node "no-preload-964204" to be "Ready" ...
	I0110 10:04:57.497689  510366 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:04:57.497750  510366 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:04:57.764716  510366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.919883832s)
	I0110 10:04:59.234218  510366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.354195746s)
	I0110 10:04:59.234333  510366 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.063443393s)
	I0110 10:04:59.234526  510366 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.736758914s)
	I0110 10:04:59.234545  510366 api_server.go:72] duration metric: took 4.834338458s to wait for apiserver process to appear ...
	I0110 10:04:59.234552  510366 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:04:59.234583  510366 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:04:59.237524  510366 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-964204 addons enable metrics-server
	
	I0110 10:04:59.240630  510366 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I0110 10:04:59.243940  510366 addons.go:530] duration metric: took 4.843380822s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I0110 10:04:59.245232  510366 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:04:59.246407  510366 api_server.go:141] control plane version: v1.35.0
	I0110 10:04:59.246441  510366 api_server.go:131] duration metric: took 11.881924ms to wait for apiserver health ...
	I0110 10:04:59.246450  510366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:04:59.250658  510366 system_pods.go:59] 8 kube-system pods found
	I0110 10:04:59.250715  510366 system_pods.go:61] "coredns-7d764666f9-nbrjs" [26b2eccf-72f4-4fee-bd27-95ab393ab006] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:04:59.250727  510366 system_pods.go:61] "etcd-no-preload-964204" [0466a1f7-5a61-4516-a394-9e671cb0fd86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:04:59.250736  510366 system_pods.go:61] "kindnet-fmp9h" [e91c85ce-4c93-4059-99c2-94f99d1adf02] Running
	I0110 10:04:59.250744  510366 system_pods.go:61] "kube-apiserver-no-preload-964204" [3c3ed06f-a02a-41f6-b884-61f575c33979] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:04:59.250751  510366 system_pods.go:61] "kube-controller-manager-no-preload-964204" [c3816078-65c5-491c-9198-9d54c097e217] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:04:59.250756  510366 system_pods.go:61] "kube-proxy-7f6q4" [02ce65ed-8383-4cd3-aae8-a5292c0b3ab1] Running
	I0110 10:04:59.250763  510366 system_pods.go:61] "kube-scheduler-no-preload-964204" [a5e9ae4f-a95a-4e42-805d-cc803cbeb877] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:04:59.250768  510366 system_pods.go:61] "storage-provisioner" [0a72c05f-1ea6-4b65-a567-cdea38d0054d] Running
	I0110 10:04:59.250775  510366 system_pods.go:74] duration metric: took 4.318685ms to wait for pod list to return data ...
	I0110 10:04:59.250783  510366 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:04:59.253556  510366 default_sa.go:45] found service account: "default"
	I0110 10:04:59.253578  510366 default_sa.go:55] duration metric: took 2.789607ms for default service account to be created ...
	I0110 10:04:59.253588  510366 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:04:59.257027  510366 system_pods.go:86] 8 kube-system pods found
	I0110 10:04:59.257107  510366 system_pods.go:89] "coredns-7d764666f9-nbrjs" [26b2eccf-72f4-4fee-bd27-95ab393ab006] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:04:59.257138  510366 system_pods.go:89] "etcd-no-preload-964204" [0466a1f7-5a61-4516-a394-9e671cb0fd86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:04:59.257174  510366 system_pods.go:89] "kindnet-fmp9h" [e91c85ce-4c93-4059-99c2-94f99d1adf02] Running
	I0110 10:04:59.257197  510366 system_pods.go:89] "kube-apiserver-no-preload-964204" [3c3ed06f-a02a-41f6-b884-61f575c33979] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:04:59.257221  510366 system_pods.go:89] "kube-controller-manager-no-preload-964204" [c3816078-65c5-491c-9198-9d54c097e217] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:04:59.257245  510366 system_pods.go:89] "kube-proxy-7f6q4" [02ce65ed-8383-4cd3-aae8-a5292c0b3ab1] Running
	I0110 10:04:59.257278  510366 system_pods.go:89] "kube-scheduler-no-preload-964204" [a5e9ae4f-a95a-4e42-805d-cc803cbeb877] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:04:59.257302  510366 system_pods.go:89] "storage-provisioner" [0a72c05f-1ea6-4b65-a567-cdea38d0054d] Running
	I0110 10:04:59.257327  510366 system_pods.go:126] duration metric: took 3.732561ms to wait for k8s-apps to be running ...
	I0110 10:04:59.257356  510366 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:04:59.257426  510366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:04:59.271328  510366 system_svc.go:56] duration metric: took 13.964593ms WaitForService to wait for kubelet
	I0110 10:04:59.271412  510366 kubeadm.go:587] duration metric: took 4.871186002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:04:59.271448  510366 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:04:59.274728  510366 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:04:59.274759  510366 node_conditions.go:123] node cpu capacity is 2
	I0110 10:04:59.274773  510366 node_conditions.go:105] duration metric: took 3.302302ms to run NodePressure ...
	I0110 10:04:59.274786  510366 start.go:242] waiting for startup goroutines ...
	I0110 10:04:59.274794  510366 start.go:247] waiting for cluster config update ...
	I0110 10:04:59.274805  510366 start.go:256] writing updated cluster config ...
	I0110 10:04:59.275121  510366 ssh_runner.go:195] Run: rm -f paused
	I0110 10:04:59.279437  510366 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:04:59.282918  510366 pod_ready.go:83] waiting for pod "coredns-7d764666f9-nbrjs" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 10:05:01.288998  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:03.788560  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:06.289928  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:08.788731  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:11.289744  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:13.788490  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:15.788721  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:18.289330  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:20.293229  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:22.789152  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:25.289050  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:27.788313  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:30.288701  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	W0110 10:05:32.788201  510366 pod_ready.go:104] pod "coredns-7d764666f9-nbrjs" is not "Ready", error: <nil>
	I0110 10:05:34.287707  510366 pod_ready.go:94] pod "coredns-7d764666f9-nbrjs" is "Ready"
	I0110 10:05:34.287739  510366 pod_ready.go:86] duration metric: took 35.004796798s for pod "coredns-7d764666f9-nbrjs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.290345  510366 pod_ready.go:83] waiting for pod "etcd-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.294279  510366 pod_ready.go:94] pod "etcd-no-preload-964204" is "Ready"
	I0110 10:05:34.294308  510366 pod_ready.go:86] duration metric: took 3.936058ms for pod "etcd-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.296592  510366 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.300913  510366 pod_ready.go:94] pod "kube-apiserver-no-preload-964204" is "Ready"
	I0110 10:05:34.300939  510366 pod_ready.go:86] duration metric: took 4.323616ms for pod "kube-apiserver-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.303216  510366 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.486370  510366 pod_ready.go:94] pod "kube-controller-manager-no-preload-964204" is "Ready"
	I0110 10:05:34.486397  510366 pod_ready.go:86] duration metric: took 183.154945ms for pod "kube-controller-manager-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:34.686518  510366 pod_ready.go:83] waiting for pod "kube-proxy-7f6q4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:35.087004  510366 pod_ready.go:94] pod "kube-proxy-7f6q4" is "Ready"
	I0110 10:05:35.087093  510366 pod_ready.go:86] duration metric: took 400.548456ms for pod "kube-proxy-7f6q4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:35.286387  510366 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:35.686453  510366 pod_ready.go:94] pod "kube-scheduler-no-preload-964204" is "Ready"
	I0110 10:05:35.686480  510366 pod_ready.go:86] duration metric: took 400.065554ms for pod "kube-scheduler-no-preload-964204" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:05:35.686494  510366 pod_ready.go:40] duration metric: took 36.40702222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:05:35.739678  510366 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:05:35.743307  510366 out.go:203] 
	W0110 10:05:35.746687  510366 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:05:35.749956  510366 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:05:35.753278  510366 out.go:179] * Done! kubectl is now configured to use "no-preload-964204" cluster and "default" namespace by default
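
The skew warning above reflects kubectl's support policy: the client is supported within one minor version of the apiserver, so a v1.33 client against a v1.35 control plane is outside that window (the run itself reports "minor skew: 2"). The client and server versions can be compared directly with standard commands (not part of the captured run), for example:

	kubectl version
	minikube -p no-preload-964204 kubectl -- version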
	
	
	==> CRI-O <==
	Jan 10 10:05:23 no-preload-964204 conmon[1661]: conmon 2d63e97a45900511ce73 <ninfo>: container 1663 exited with status 1
	Jan 10 10:05:23 no-preload-964204 crio[661]: time="2026-01-10T10:05:23.853520956Z" level=info msg="Removing container: f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52" id=8f90c038-9d29-45d0-9920-43ad33ed8182 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:05:23 no-preload-964204 crio[661]: time="2026-01-10T10:05:23.862583456Z" level=info msg="Error loading conmon cgroup of container f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52: cgroup deleted" id=8f90c038-9d29-45d0-9920-43ad33ed8182 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:05:23 no-preload-964204 crio[661]: time="2026-01-10T10:05:23.866087837Z" level=info msg="Removed container f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l/dashboard-metrics-scraper" id=8f90c038-9d29-45d0-9920-43ad33ed8182 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:05:28 no-preload-964204 conmon[1161]: conmon e5367860812c0fd9dbc4 <ninfo>: container 1170 exited with status 1
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.866991953Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7dc13d82-c9d3-48fe-92d2-71e0f754c775 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.868341057Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a8bf03da-9df8-46b6-9e88-aa4c2e4af4ff name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.869391557Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=dc544324-1150-4379-8241-fd5472e14fb9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.869499759Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.874418714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.87459388Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/513677cb66801c2627fc9495c678f0aa9416c2dc134933d155dc41312bbd526f/merged/etc/passwd: no such file or directory"
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.874614623Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/513677cb66801c2627fc9495c678f0aa9416c2dc134933d155dc41312bbd526f/merged/etc/group: no such file or directory"
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.874869428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.897910116Z" level=info msg="Created container cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f: kube-system/storage-provisioner/storage-provisioner" id=dc544324-1150-4379-8241-fd5472e14fb9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.901088044Z" level=info msg="Starting container: cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f" id=47fcc7e7-7dbf-44ce-b194-b7c7d8f1eae0 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:05:28 no-preload-964204 crio[661]: time="2026-01-10T10:05:28.908910527Z" level=info msg="Started container" PID=1675 containerID=cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f description=kube-system/storage-provisioner/storage-provisioner id=47fcc7e7-7dbf-44ce-b194-b7c7d8f1eae0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14b557a1126a5d54db45ef11d021726baa38ea7dcfbc3b82d323c99e3c1f91bc
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.651856712Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.651892831Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.656359571Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.656394575Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.66078302Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.660816572Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.660838357Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.664850379Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:05:38 no-preload-964204 crio[661]: time="2026-01-10T10:05:38.664890864Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	cd918a01d2e1b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           23 seconds ago      Running             storage-provisioner         2                   14b557a1126a5       storage-provisioner                          kube-system
	2d63e97a45900       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   911cb839e2796       dashboard-metrics-scraper-867fb5f87b-f4z7l   kubernetes-dashboard
	5bba4f2fe6e9a       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago      Running             kubernetes-dashboard        0                   3e1ad88d4212f       kubernetes-dashboard-b84665fb8-6m4km         kubernetes-dashboard
	abc42cffac590       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           53 seconds ago      Running             coredns                     1                   83cf949b77619       coredns-7d764666f9-nbrjs                     kube-system
	c7dd71b3888f7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   30a621d83c17c       busybox                                      default
	6d4368ac3242c       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   8d238b81170f1       kindnet-fmp9h                                kube-system
	fe92ff1c402a7       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           54 seconds ago      Running             kube-proxy                  1                   812c93dcafcf1       kube-proxy-7f6q4                             kube-system
	e5367860812c0       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago      Exited              storage-provisioner         1                   14b557a1126a5       storage-provisioner                          kube-system
	b27a68f656d95       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           57 seconds ago      Running             kube-scheduler              1                   0709fe6d06ad3       kube-scheduler-no-preload-964204             kube-system
	146888a99c32f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           57 seconds ago      Running             kube-apiserver              1                   ed6f856e76c1f       kube-apiserver-no-preload-964204             kube-system
	95f695558eee3       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           57 seconds ago      Running             etcd                        1                   33838eeeb5c20       etcd-no-preload-964204                       kube-system
	c58341e383779       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           57 seconds ago      Running             kube-controller-manager     1                   377f5dde7f990       kube-controller-manager-no-preload-964204    kube-system
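
For context, the container listing above is the CRI-O view of the node; an equivalent listing can be produced on the minikube node with the standard crictl client (not part of the captured output), e.g.:

	minikube -p no-preload-964204 ssh -- sudo crictl ps -a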
	
	
	==> coredns [abc42cffac590ed77549483d1a05755448e312a585d4204a268fc5e5f6a03e0a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51141 - 12533 "HINFO IN 5204147763035130131.4557860035770675533. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043648402s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-964204
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-964204
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=no-preload-964204
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_04_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:03:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-964204
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:05:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:05:28 +0000   Sat, 10 Jan 2026 10:03:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:05:28 +0000   Sat, 10 Jan 2026 10:03:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:05:28 +0000   Sat, 10 Jan 2026 10:03:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:05:28 +0000   Sat, 10 Jan 2026 10:04:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-964204
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                03ea2076-6e07-410a-8003-5ef363ddb41d
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-nbrjs                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-no-preload-964204                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         112s
	  kube-system                 kindnet-fmp9h                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-964204              250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-no-preload-964204     200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-7f6q4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-964204              100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-f4z7l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-6m4km          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-964204 event: Registered Node no-preload-964204 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node no-preload-964204 event: Registered Node no-preload-964204 in Controller
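
The node summary above is the usual "kubectl describe node" output; assuming the kubectl context created by minikube matches the profile name, it could be regenerated with:

	kubectl --context no-preload-964204 describe node no-preload-964204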
	
	
	==> dmesg <==
	[Jan10 09:31] overlayfs: idmapped layers are currently not supported
	[Jan10 09:35] overlayfs: idmapped layers are currently not supported
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [95f695558eee3b836eb9c525cc507ba61a2606e94eb5c3f56adb26321cc21e29] <==
	{"level":"info","ts":"2026-01-10T10:04:54.614983Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:04:54.615032Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:04:54.615245Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:04:54.615256Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:04:54.616057Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T10:04:54.616109Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:04:54.616180Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T10:04:54.656974Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T10:04:54.657058Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:04:54.657118Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:04:54.657132Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:04:54.657147Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T10:04:54.658163Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:04:54.658187Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:04:54.658232Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T10:04:54.658244Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:04:54.661723Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-964204 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:04:54.661767Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:04:54.662691Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:04:54.682520Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T10:04:54.685801Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:04:54.686866Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:04:54.691790Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T10:04:54.692917Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:04:54.692985Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:05:52 up  2:48,  0 user,  load average: 1.29, 1.47, 1.86
	Linux no-preload-964204 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6d4368ac3242cdcba3ee7fb78eb2026a6111050fe391df24e78edf0b58cf778f] <==
	I0110 10:04:58.452395       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:04:58.473071       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:04:58.473240       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:04:58.473253       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:04:58.473268       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:04:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:04:58.647328       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:04:58.647398       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:04:58.647432       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:04:58.649339       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 10:05:28.647685       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0110 10:05:28.649780       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 10:05:28.649856       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 10:05:28.649877       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0110 10:05:29.949547       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:05:29.949585       1 metrics.go:72] Registering metrics
	I0110 10:05:29.949636       1 controller.go:711] "Syncing nftables rules"
	I0110 10:05:38.646935       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:05:38.646975       1 main.go:301] handling current node
	I0110 10:05:48.656564       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:05:48.656667       1 main.go:301] handling current node
	
	
	==> kube-apiserver [146888a99c32f1421edf0f2758f99439bfb9a9b52b71842262af693d53517c9b] <==
	I0110 10:04:57.604678       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 10:04:57.604705       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 10:04:57.605271       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 10:04:57.605975       1 aggregator.go:187] initial CRD sync complete...
	I0110 10:04:57.605990       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 10:04:57.605996       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 10:04:57.606001       1 cache.go:39] Caches are synced for autoregister controller
	I0110 10:04:57.606160       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:57.606187       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 10:04:57.632660       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 10:04:57.634143       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 10:04:57.660398       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 10:04:57.672995       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:04:57.674302       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:04:57.852784       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:04:58.339968       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:04:58.814962       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:04:58.934369       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:04:58.974750       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:04:58.989781       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:04:59.113179       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.185.78"}
	I0110 10:04:59.133429       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.196.59"}
	I0110 10:05:01.069158       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:05:01.169135       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:05:01.268660       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c58341e383779a703a569adcc9010c3f6caf2719864eabf706b906edf6cb526c] <==
	I0110 10:05:00.677124       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-964204"
	I0110 10:05:00.677211       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 10:05:00.685000       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.686153       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.686796       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.687582       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.687691       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.687843       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.687937       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.688288       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.688406       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.688465       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.688584       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.688952       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.689089       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.689485       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.689635       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.691090       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.692644       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.724247       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:05:00.750617       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.783649       1 shared_informer.go:377] "Caches are synced"
	I0110 10:05:00.783754       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:05:00.783788       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 10:05:00.824775       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [fe92ff1c402a775ab835548cd8b9b6ed7a60eea52715b689a2348e008a515c33] <==
	I0110 10:04:58.709796       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:04:58.921909       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:04:59.036870       1 shared_informer.go:377] "Caches are synced"
	I0110 10:04:59.036913       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 10:04:59.036981       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:04:59.082735       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:04:59.082895       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:04:59.088323       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:04:59.088912       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:04:59.089148       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:04:59.090407       1 config.go:200] "Starting service config controller"
	I0110 10:04:59.090484       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:04:59.090538       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:04:59.090586       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:04:59.090643       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:04:59.090677       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:04:59.091498       1 config.go:309] "Starting node config controller"
	I0110 10:04:59.091556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:04:59.091586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:04:59.190621       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:04:59.190691       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 10:04:59.190940       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b27a68f656d959fe8dd95b31847ae1379016e414f61244c53a75e06cd9529ef1] <==
	I0110 10:04:55.779116       1 serving.go:386] Generated self-signed cert in-memory
	W0110 10:04:57.490851       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 10:04:57.490879       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 10:04:57.490888       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 10:04:57.490896       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 10:04:57.592082       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 10:04:57.592126       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:04:57.597066       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 10:04:57.597102       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:04:57.597284       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 10:04:57.597364       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 10:04:57.698087       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:05:11 no-preload-964204 kubelet[782]: E0110 10:05:11.821052     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:11 no-preload-964204 kubelet[782]: I0110 10:05:11.821072     782 scope.go:122] "RemoveContainer" containerID="f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52"
	Jan 10 10:05:11 no-preload-964204 kubelet[782]: E0110 10:05:11.821222     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-f4z7l_kubernetes-dashboard(8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" podUID="8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5"
	Jan 10 10:05:12 no-preload-964204 kubelet[782]: E0110 10:05:12.825222     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:12 no-preload-964204 kubelet[782]: I0110 10:05:12.825724     782 scope.go:122] "RemoveContainer" containerID="f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52"
	Jan 10 10:05:12 no-preload-964204 kubelet[782]: E0110 10:05:12.825965     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-f4z7l_kubernetes-dashboard(8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" podUID="8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5"
	Jan 10 10:05:13 no-preload-964204 kubelet[782]: E0110 10:05:13.827694     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:13 no-preload-964204 kubelet[782]: I0110 10:05:13.827740     782 scope.go:122] "RemoveContainer" containerID="f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52"
	Jan 10 10:05:13 no-preload-964204 kubelet[782]: E0110 10:05:13.827908     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-f4z7l_kubernetes-dashboard(8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" podUID="8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5"
	Jan 10 10:05:13 no-preload-964204 kubelet[782]: E0110 10:05:13.943908     782 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-964204" containerName="kube-apiserver"
	Jan 10 10:05:14 no-preload-964204 kubelet[782]: E0110 10:05:14.830218     782 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-964204" containerName="kube-apiserver"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: E0110 10:05:23.722062     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: I0110 10:05:23.722100     782 scope.go:122] "RemoveContainer" containerID="f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: I0110 10:05:23.851856     782 scope.go:122] "RemoveContainer" containerID="f33b055718aac66d7797bbeba1f8e8feb800bb327ac3c135f0680f40d8921f52"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: E0110 10:05:23.852152     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: I0110 10:05:23.852182     782 scope.go:122] "RemoveContainer" containerID="2d63e97a45900511ce7398bafb57d28bb25cc046f89d1326f20113a76e6d08df"
	Jan 10 10:05:23 no-preload-964204 kubelet[782]: E0110 10:05:23.852342     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-f4z7l_kubernetes-dashboard(8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" podUID="8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5"
	Jan 10 10:05:28 no-preload-964204 kubelet[782]: I0110 10:05:28.866365     782 scope.go:122] "RemoveContainer" containerID="e5367860812c0fd9dbc45503fe4cb48fee1dbd289d6727499208c2235c12dfda"
	Jan 10 10:05:32 no-preload-964204 kubelet[782]: E0110 10:05:32.471874     782 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" containerName="dashboard-metrics-scraper"
	Jan 10 10:05:32 no-preload-964204 kubelet[782]: I0110 10:05:32.471926     782 scope.go:122] "RemoveContainer" containerID="2d63e97a45900511ce7398bafb57d28bb25cc046f89d1326f20113a76e6d08df"
	Jan 10 10:05:32 no-preload-964204 kubelet[782]: E0110 10:05:32.472087     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-f4z7l_kubernetes-dashboard(8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-f4z7l" podUID="8cc9bf3f-4f8d-460e-9fa2-faf0fbbfb6b5"
	Jan 10 10:05:34 no-preload-964204 kubelet[782]: E0110 10:05:34.077079     782 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-nbrjs" containerName="coredns"
	Jan 10 10:05:47 no-preload-964204 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 10:05:48 no-preload-964204 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 10:05:48 no-preload-964204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [5bba4f2fe6e9a36f29cd4910fddfe6aff4d4e5cb154e4bdf8fb68ba0e7ea0c95] <==
	2026/01/10 10:05:06 Using namespace: kubernetes-dashboard
	2026/01/10 10:05:06 Using in-cluster config to connect to apiserver
	2026/01/10 10:05:06 Using secret token for csrf signing
	2026/01/10 10:05:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 10:05:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 10:05:06 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 10:05:06 Generating JWE encryption key
	2026/01/10 10:05:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 10:05:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 10:05:06 Initializing JWE encryption key from synchronized object
	2026/01/10 10:05:06 Creating in-cluster Sidecar client
	2026/01/10 10:05:06 Serving insecurely on HTTP port: 9090
	2026/01/10 10:05:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:05:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:05:06 Starting overwatch
	
	
	==> storage-provisioner [cd918a01d2e1bcb19024c1f9f200929f303e3b2817cf621882e5d0aacd0cea8f] <==
	I0110 10:05:28.923638       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:05:28.939580       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:05:28.939875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 10:05:28.942085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:32.397641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:36.657848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:40.256043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:43.309832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:46.332479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:46.338306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:05:46.338518       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:05:46.338709       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-964204_1e5d4e18-f450-45ed-91c1-eb0acc5f47da!
	I0110 10:05:46.339214       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d8770a8-2c32-4636-b869-a554550e1ab6", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-964204_1e5d4e18-f450-45ed-91c1-eb0acc5f47da became leader
	W0110 10:05:46.342575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:46.350215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:05:46.439727       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-964204_1e5d4e18-f450-45ed-91c1-eb0acc5f47da!
	W0110 10:05:48.353238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:48.359614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:50.363159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:50.369892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:52.373464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:05:52.381999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e5367860812c0fd9dbc45503fe4cb48fee1dbd289d6727499208c2235c12dfda] <==
	I0110 10:04:58.290456       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 10:05:28.291987       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-964204 -n no-preload-964204
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-964204 -n no-preload-964204: exit status 2 (398.166343ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-964204 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-219333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-219333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (285.137225ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:06:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-219333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-219333 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-219333 describe deploy/metrics-server -n kube-system: exit status 1 (81.82802ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-219333 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-219333
helpers_test.go:244: (dbg) docker inspect embed-certs-219333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51",
	        "Created": "2026-01-10T10:06:01.259250049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 514875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:06:01.322389266Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/hostname",
	        "HostsPath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/hosts",
	        "LogPath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51-json.log",
	        "Name": "/embed-certs-219333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-219333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-219333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51",
	                "LowerDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579/merged",
	                "UpperDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579/diff",
	                "WorkDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-219333",
	                "Source": "/var/lib/docker/volumes/embed-certs-219333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-219333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-219333",
	                "name.minikube.sigs.k8s.io": "embed-certs-219333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f52fcf733460270fab8ed5e881f6189acef449f7a03613e5c760bbe7fcf9168",
	            "SandboxKey": "/var/run/docker/netns/9f52fcf73346",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-219333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:eb:31:b2:0f:e3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d1e980d25c729b4e5350b1ccfb2f436b31893785314b40506467e9431269ca0",
	                    "EndpointID": "f0a1c518c3992eac341f4a33d2ec106fa1695b5d3ef01e3ca934557c77d3f562",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-219333",
	                        "11d72dc06eff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-219333 -n embed-certs-219333
E0110 10:06:53.573777  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-219333 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-219333 logs -n 25: (1.481660215s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-options-525619                                                                                                                                                                                                                        │ cert-options-525619          │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:00 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:00 UTC │ 10 Jan 26 10:01 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-729486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │                     │
	│ stop    │ -p old-k8s-version-729486 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:01 UTC │ 10 Jan 26 10:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-729486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:02 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:03 UTC │
	│ image   │ old-k8s-version-729486 image list --format=json                                                                                                                                                                                               │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ pause   │ -p old-k8s-version-729486 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │                     │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │                     │
	│ stop    │ -p no-preload-964204 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable dashboard -p no-preload-964204 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:05 UTC │
	│ image   │ no-preload-964204 image list --format=json                                                                                                                                                                                                    │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ pause   │ -p no-preload-964204 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │                     │
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:06 UTC │
	│ ssh     │ force-systemd-flag-524845 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p force-systemd-flag-524845                                                                                                                                                                                                                  │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p disable-driver-mounts-757819                                                                                                                                                                                                               │ disable-driver-mounts-757819 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-219333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:06:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:06:30.438492  517877 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:06:30.438639  517877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:06:30.438653  517877 out.go:374] Setting ErrFile to fd 2...
	I0110 10:06:30.438659  517877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:06:30.438932  517877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:06:30.439391  517877 out.go:368] Setting JSON to false
	I0110 10:06:30.440329  517877 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10140,"bootTime":1768029451,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:06:30.440410  517877 start.go:143] virtualization:  
	I0110 10:06:30.446522  517877 out.go:179] * [default-k8s-diff-port-820203] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:06:30.449784  517877 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:06:30.449876  517877 notify.go:221] Checking for updates...
	I0110 10:06:30.455716  517877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:06:30.458856  517877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:06:30.462156  517877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:06:30.465149  517877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:06:30.468585  517877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:06:30.472150  517877 config.go:182] Loaded profile config "embed-certs-219333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:06:30.472295  517877 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:06:30.494672  517877 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:06:30.494788  517877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:06:30.554452  517877 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:06:30.545180373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:06:30.554555  517877 docker.go:319] overlay module found
	I0110 10:06:30.557747  517877 out.go:179] * Using the docker driver based on user configuration
	I0110 10:06:30.560764  517877 start.go:309] selected driver: docker
	I0110 10:06:30.560785  517877 start.go:928] validating driver "docker" against <nil>
	I0110 10:06:30.560800  517877 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:06:30.561509  517877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:06:30.616295  517877 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:06:30.606676287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:06:30.616464  517877 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 10:06:30.616753  517877 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:06:30.619710  517877 out.go:179] * Using Docker driver with root privileges
	I0110 10:06:30.622590  517877 cni.go:84] Creating CNI manager for ""
	I0110 10:06:30.622656  517877 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:06:30.622671  517877 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 10:06:30.622750  517877 start.go:353] cluster config:
	{Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:06:30.625785  517877 out.go:179] * Starting "default-k8s-diff-port-820203" primary control-plane node in "default-k8s-diff-port-820203" cluster
	I0110 10:06:30.628656  517877 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:06:30.631629  517877 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:06:30.634534  517877 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:06:30.634561  517877 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:06:30.634576  517877 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:06:30.634586  517877 cache.go:65] Caching tarball of preloaded images
	I0110 10:06:30.634661  517877 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:06:30.634671  517877 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:06:30.634780  517877 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/config.json ...
	I0110 10:06:30.634800  517877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/config.json: {Name:mk373a9e9181adcc160a897cb32ca87fb0563b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:30.656765  517877 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:06:30.656843  517877 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:06:30.656866  517877 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:06:30.656924  517877 start.go:360] acquireMachinesLock for default-k8s-diff-port-820203: {Name:mkaca248efde78a9e4798a5020ca02bdd83351f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:06:30.657060  517877 start.go:364] duration metric: took 102.393µs to acquireMachinesLock for "default-k8s-diff-port-820203"
	I0110 10:06:30.657094  517877 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:06:30.657176  517877 start.go:125] createHost starting for "" (driver="docker")
	I0110 10:06:27.851533  514451 addons.go:530] duration metric: took 2.288531944s for enable addons: enabled=[default-storageclass storage-provisioner]
	W0110 10:06:29.268294  514451 node_ready.go:57] node "embed-certs-219333" has "Ready":"False" status (will retry)
	W0110 10:06:31.269211  514451 node_ready.go:57] node "embed-certs-219333" has "Ready":"False" status (will retry)
	I0110 10:06:30.660876  517877 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 10:06:30.661113  517877 start.go:159] libmachine.API.Create for "default-k8s-diff-port-820203" (driver="docker")
	I0110 10:06:30.661153  517877 client.go:173] LocalClient.Create starting
	I0110 10:06:30.661220  517877 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 10:06:30.661257  517877 main.go:144] libmachine: Decoding PEM data...
	I0110 10:06:30.661280  517877 main.go:144] libmachine: Parsing certificate...
	I0110 10:06:30.661334  517877 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 10:06:30.661359  517877 main.go:144] libmachine: Decoding PEM data...
	I0110 10:06:30.661371  517877 main.go:144] libmachine: Parsing certificate...
	I0110 10:06:30.661751  517877 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-820203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 10:06:30.681632  517877 cli_runner.go:211] docker network inspect default-k8s-diff-port-820203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 10:06:30.681766  517877 network_create.go:284] running [docker network inspect default-k8s-diff-port-820203] to gather additional debugging logs...
	I0110 10:06:30.681791  517877 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-820203
	W0110 10:06:30.713539  517877 cli_runner.go:211] docker network inspect default-k8s-diff-port-820203 returned with exit code 1
	I0110 10:06:30.713574  517877 network_create.go:287] error running [docker network inspect default-k8s-diff-port-820203]: docker network inspect default-k8s-diff-port-820203: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-820203 not found
	I0110 10:06:30.713589  517877 network_create.go:289] output of [docker network inspect default-k8s-diff-port-820203]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-820203 not found
	
	** /stderr **
	I0110 10:06:30.713690  517877 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:06:30.731172  517877 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 10:06:30.731665  517877 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 10:06:30.731916  517877 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 10:06:30.732199  517877 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8d1e980d25c7 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:aa:73:a7:a8:e3:43} reservation:<nil>}
	I0110 10:06:30.732765  517877 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4e900}
	I0110 10:06:30.732792  517877 network_create.go:124] attempt to create docker network default-k8s-diff-port-820203 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 10:06:30.732851  517877 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-820203 default-k8s-diff-port-820203
	I0110 10:06:30.794394  517877 network_create.go:108] docker network default-k8s-diff-port-820203 192.168.85.0/24 created
	I0110 10:06:30.794438  517877 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-820203" container
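The lines above show the subnet scan that precedes network creation: candidate 192.168.x.0/24 ranges already claimed by bridge interfaces (49, 58, 67, 76) are skipped and the first free one (85) is used, with .1 as the gateway and .2 as the node's static IP. A minimal Go sketch of that kind of scan follows; the step of 9 in the third octet and the hard-coded "taken" list are assumptions read off this log, not the actual minikube implementation.

package main

import (
	"fmt"
	"net"
)

// takenSubnets mimics the bridge networks reported above; in the real run
// these come from `docker network inspect`.
var takenSubnets = []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... stepping the
// third octet by 9 (the pattern visible in this log) and returns the first
// candidate that is not already taken.
func firstFreeSubnet() (*net.IPNet, error) {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !isTaken(cidr) {
			_, ipnet, err := net.ParseCIDR(cidr)
			return ipnet, err
		}
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func isTaken(cidr string) bool {
	for _, t := range takenSubnets {
		if t == cidr {
			return true
		}
	}
	return false
}

func main() {
	subnet, err := firstFreeSubnet()
	if err != nil {
		panic(err)
	}
	// With 192.168.85.0/24 free, the gateway becomes .1 and the node IP .2,
	// matching the "calculated static IP 192.168.85.2" line above.
	fmt.Println("subnet:", subnet) // 192.168.85.0/24
}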
	I0110 10:06:30.794510  517877 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 10:06:30.811756  517877 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-820203 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-820203 --label created_by.minikube.sigs.k8s.io=true
	I0110 10:06:30.829627  517877 oci.go:103] Successfully created a docker volume default-k8s-diff-port-820203
	I0110 10:06:30.829722  517877 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-820203-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-820203 --entrypoint /usr/bin/test -v default-k8s-diff-port-820203:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 10:06:31.362449  517877 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-820203
	I0110 10:06:31.362527  517877 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:06:31.362537  517877 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 10:06:31.362637  517877 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-820203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 10:06:35.231408  517877 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-820203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.868729666s)
	I0110 10:06:35.231442  517877 kic.go:203] duration metric: took 3.868900926s to extract preloaded images to volume ...
	W0110 10:06:35.231605  517877 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 10:06:35.231723  517877 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 10:06:35.292571  517877 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-820203 --name default-k8s-diff-port-820203 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-820203 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-820203 --network default-k8s-diff-port-820203 --ip 192.168.85.2 --volume default-k8s-diff-port-820203:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	W0110 10:06:33.269592  514451 node_ready.go:57] node "embed-certs-219333" has "Ready":"False" status (will retry)
	W0110 10:06:35.769145  514451 node_ready.go:57] node "embed-certs-219333" has "Ready":"False" status (will retry)
	I0110 10:06:35.612148  517877 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Running}}
	I0110 10:06:35.638261  517877 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:06:35.659528  517877 cli_runner.go:164] Run: docker exec default-k8s-diff-port-820203 stat /var/lib/dpkg/alternatives/iptables
	I0110 10:06:35.707733  517877 oci.go:144] the created container "default-k8s-diff-port-820203" has a running status.
	I0110 10:06:35.707762  517877 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa...
	I0110 10:06:36.293389  517877 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 10:06:36.326480  517877 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:06:36.357871  517877 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 10:06:36.357891  517877 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-820203 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 10:06:36.416243  517877 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:06:36.437468  517877 machine.go:94] provisionDockerMachine start ...
	I0110 10:06:36.437730  517877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:06:36.467485  517877 main.go:144] libmachine: Using SSH client type: native
	I0110 10:06:36.469149  517877 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I0110 10:06:36.469169  517877 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:06:36.672078  517877 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-820203
	
	I0110 10:06:36.672126  517877 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-820203"
	I0110 10:06:36.672213  517877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:06:36.699797  517877 main.go:144] libmachine: Using SSH client type: native
	I0110 10:06:36.700114  517877 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I0110 10:06:36.700133  517877 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-820203 && echo "default-k8s-diff-port-820203" | sudo tee /etc/hostname
	I0110 10:06:36.873264  517877 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-820203
	
	I0110 10:06:36.873345  517877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:06:36.893424  517877 main.go:144] libmachine: Using SSH client type: native
	I0110 10:06:36.893731  517877 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I0110 10:06:36.893754  517877 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-820203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-820203/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-820203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:06:37.056898  517877 main.go:144] libmachine: SSH cmd err, output: <nil>: 
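Everything in the provisioning phase above runs over SSH against the forwarded port 127.0.0.1:33444 with the generated id_rsa key: read the hostname, set it, then patch /etc/hosts. A minimal sketch of issuing one such remote command with golang.org/x/crypto/ssh, assuming the key path and port shown in this log; host key checking is skipped because the target is a throwaway test container.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Private key created for the node earlier in this log.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container, key not pinned
	}

	// 33444 is the host port Docker mapped to the container's 22/tcp.
	client, err := ssh.Dial("tcp", "127.0.0.1:33444", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out) // default-k8s-diff-port-820203
}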
	I0110 10:06:37.056940  517877 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:06:37.056963  517877 ubuntu.go:190] setting up certificates
	I0110 10:06:37.056975  517877 provision.go:84] configureAuth start
	I0110 10:06:37.057042  517877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-820203
	I0110 10:06:37.075304  517877 provision.go:143] copyHostCerts
	I0110 10:06:37.075376  517877 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:06:37.075385  517877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:06:37.075464  517877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:06:37.075635  517877 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:06:37.075651  517877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:06:37.075693  517877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:06:37.075759  517877 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:06:37.075765  517877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:06:37.075790  517877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:06:37.075835  517877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-820203 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-820203 localhost minikube]
	I0110 10:06:37.126464  517877 provision.go:177] copyRemoteCerts
	I0110 10:06:37.126525  517877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:06:37.126563  517877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:06:37.143488  517877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:06:37.247973  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:06:37.265492  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 10:06:37.285919  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 10:06:37.305167  517877 provision.go:87] duration metric: took 248.16376ms to configureAuth
	I0110 10:06:37.305193  517877 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:06:37.305384  517877 config.go:182] Loaded profile config "default-k8s-diff-port-820203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:06:37.305478  517877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:06:37.327332  517877 main.go:144] libmachine: Using SSH client type: native
	I0110 10:06:37.327635  517877 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33444 <nil> <nil>}
	I0110 10:06:37.327649  517877 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:06:37.645893  517877 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:06:37.645919  517877 machine.go:97] duration metric: took 1.208430596s to provisionDockerMachine
	I0110 10:06:37.645930  517877 client.go:176] duration metric: took 6.984767313s to LocalClient.Create
	I0110 10:06:37.645945  517877 start.go:167] duration metric: took 6.984832955s to libmachine.API.Create "default-k8s-diff-port-820203"
	I0110 10:06:37.645953  517877 start.go:293] postStartSetup for "default-k8s-diff-port-820203" (driver="docker")
	I0110 10:06:37.645964  517877 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:06:37.646045  517877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:06:37.646091  517877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:06:37.663317  517877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:06:37.771757  517877 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:06:37.775364  517877 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:06:37.775434  517877 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:06:37.775460  517877 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:06:37.775538  517877 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:06:37.775638  517877 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:06:37.775744  517877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:06:37.783045  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:06:37.802346  517877 start.go:296] duration metric: took 156.3791ms for postStartSetup
	I0110 10:06:37.802709  517877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-820203
	I0110 10:06:37.818677  517877 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/config.json ...
	I0110 10:06:37.818956  517877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:06:37.819022  517877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:06:37.835614  517877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:06:37.938138  517877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:06:37.944540  517877 start.go:128] duration metric: took 7.287349957s to createHost
	I0110 10:06:37.944573  517877 start.go:83] releasing machines lock for "default-k8s-diff-port-820203", held for 7.287499703s
	I0110 10:06:37.944644  517877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-820203
	I0110 10:06:37.964367  517877 ssh_runner.go:195] Run: cat /version.json
	I0110 10:06:37.964441  517877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:06:37.964602  517877 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:06:37.964659  517877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:06:37.987358  517877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:06:37.998099  517877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33444 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:06:38.092909  517877 ssh_runner.go:195] Run: systemctl --version
	I0110 10:06:38.196251  517877 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:06:38.235704  517877 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:06:38.240057  517877 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:06:38.240173  517877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:06:38.270850  517877 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 10:06:38.270911  517877 start.go:496] detecting cgroup driver to use...
	I0110 10:06:38.270967  517877 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:06:38.271059  517877 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:06:38.288419  517877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:06:38.302006  517877 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:06:38.302109  517877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:06:38.320394  517877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:06:38.339895  517877 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:06:38.456481  517877 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:06:38.581410  517877 docker.go:234] disabling docker service ...
	I0110 10:06:38.581478  517877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:06:38.605141  517877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:06:38.625071  517877 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:06:38.739506  517877 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:06:38.855111  517877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:06:38.869098  517877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:06:38.884691  517877 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:06:38.884807  517877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:38.893586  517877 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:06:38.893707  517877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:38.903637  517877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:38.913949  517877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:38.923495  517877 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:06:38.932311  517877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:38.942069  517877 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:38.955460  517877 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:06:38.964427  517877 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:06:38.972608  517877 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:06:38.980110  517877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:06:39.099075  517877 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:06:39.269362  517877 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:06:39.269435  517877 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:06:39.273312  517877 start.go:574] Will wait 60s for crictl version
	I0110 10:06:39.273420  517877 ssh_runner.go:195] Run: which crictl
	I0110 10:06:39.277036  517877 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:06:39.305401  517877 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:06:39.305482  517877 ssh_runner.go:195] Run: crio --version
	I0110 10:06:39.336793  517877 ssh_runner.go:195] Run: crio --version
	I0110 10:06:39.366411  517877 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:06:39.369323  517877 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-820203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:06:39.386192  517877 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 10:06:39.390033  517877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
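The bash one-liner above rewrites /etc/hosts non-interactively: filter out any stale host.minikube.internal entry, append a fresh one pointing at the network gateway, and copy the result back into place. A rough Go equivalent of that rewrite is sketched below; it reads the local /etc/hosts and writes to hosts.new instead of overwriting anything, since the real run does this remotely with sudo.

package main

import (
	"fmt"
	"os"
	"strings"
)

// addHostEntry mirrors the shape of the bash snippet above: drop any existing
// line for the name, append a fresh "ip\tname" entry, and write the result out.
func addHostEntry(src, dst, ip, name string) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same filter as `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(dst, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from the log; writing to hosts.new avoids needing root locally.
	if err := addHostEntry("/etc/hosts", "hosts.new", "192.168.85.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("host entry written to hosts.new")
}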
	I0110 10:06:39.399811  517877 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:06:39.399929  517877 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:06:39.399985  517877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:06:39.444205  517877 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:06:39.444231  517877 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:06:39.444287  517877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:06:39.471500  517877 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:06:39.471525  517877 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:06:39.471534  517877 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I0110 10:06:39.471620  517877 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-820203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:06:39.471710  517877 ssh_runner.go:195] Run: crio config
	I0110 10:06:39.531300  517877 cni.go:84] Creating CNI manager for ""
	I0110 10:06:39.531325  517877 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:06:39.531343  517877 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:06:39.531368  517877 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-820203 NodeName:default-k8s-diff-port-820203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:06:39.531498  517877 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-820203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
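The kubeadm config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is later copied to the node as /var/tmp/minikube/kubeadm.yaml.new and fed to kubeadm init. A minimal sketch of rendering just the InitConfiguration header from a Go text/template follows; the template text is trimmed from the dump above and the parameter names are illustrative, not minikube's own.

package main

import (
	"os"
	"text/template"
)

// Only the InitConfiguration header from the kubeadm config above; the real
// file continues with the ClusterConfiguration, KubeletConfiguration, and
// KubeProxyConfiguration sections.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.AdvertiseAddress}}"
  taints: []
`

type initParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	// Values taken from the cluster in this log.
	p := initParams{AdvertiseAddress: "192.168.85.2", BindPort: 8444, NodeName: "default-k8s-diff-port-820203"}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}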
	
	I0110 10:06:39.531574  517877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:06:39.539499  517877 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:06:39.539596  517877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:06:39.546981  517877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 10:06:39.560140  517877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:06:39.573192  517877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I0110 10:06:39.585808  517877 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:06:39.589442  517877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:06:39.598947  517877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:06:39.730526  517877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:06:39.746467  517877 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203 for IP: 192.168.85.2
	I0110 10:06:39.746491  517877 certs.go:195] generating shared ca certs ...
	I0110 10:06:39.746507  517877 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:39.746697  517877 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:06:39.746768  517877 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:06:39.746783  517877 certs.go:257] generating profile certs ...
	I0110 10:06:39.746854  517877 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.key
	I0110 10:06:39.746893  517877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.crt with IP's: []
	I0110 10:06:40.176674  517877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.crt ...
	I0110 10:06:40.176707  517877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.crt: {Name:mk05abf09876dfa5109af617e0096c244955ce8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:40.176981  517877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.key ...
	I0110 10:06:40.176999  517877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.key: {Name:mk9d8a319e5984bdaea8875fb73bcd857620fc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:40.177156  517877 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.key.15c00bf5
	I0110 10:06:40.177186  517877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.crt.15c00bf5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 10:06:40.298739  517877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.crt.15c00bf5 ...
	I0110 10:06:40.298769  517877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.crt.15c00bf5: {Name:mka442b7b689dc27789fe7bd2966da887b8bd8bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:40.298919  517877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.key.15c00bf5 ...
	I0110 10:06:40.298935  517877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.key.15c00bf5: {Name:mkb254ca9e06603811b1a7bef7ee87d0c7cb6902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:40.299016  517877 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.crt.15c00bf5 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.crt
	I0110 10:06:40.299128  517877 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.key.15c00bf5 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.key
	I0110 10:06:40.299202  517877 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.key
	I0110 10:06:40.299218  517877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.crt with IP's: []
	W0110 10:06:38.268209  514451 node_ready.go:57] node "embed-certs-219333" has "Ready":"False" status (will retry)
	I0110 10:06:40.268663  514451 node_ready.go:49] node "embed-certs-219333" is "Ready"
	I0110 10:06:40.268690  514451 node_ready.go:38] duration metric: took 13.003216966s for node "embed-certs-219333" to be "Ready" ...
	I0110 10:06:40.268704  514451 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:06:40.268765  514451 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:06:40.285979  514451 api_server.go:72] duration metric: took 14.723265943s to wait for apiserver process to appear ...
	I0110 10:06:40.286002  514451 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:06:40.286029  514451 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:06:40.294766  514451 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:06:40.295994  514451 api_server.go:141] control plane version: v1.35.0
	I0110 10:06:40.296015  514451 api_server.go:131] duration metric: took 10.006134ms to wait for apiserver health ...
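The interleaved embed-certs-219333 lines above (logger 514451) poll https://192.168.76.2:8443/healthz until the apiserver answers 200/ok. A bare-bones version of that probe is sketched below; it skips TLS verification for brevity, whereas a proper client would trust the cluster CA from /var/lib/minikube/certs.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cert signed by the cluster CA; a real
			// health check would load that CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not ready yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}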
	I0110 10:06:40.296025  514451 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:06:40.301254  514451 system_pods.go:59] 8 kube-system pods found
	I0110 10:06:40.301283  514451 system_pods.go:61] "coredns-7d764666f9-ct6xj" [7202fe21-4df1-4fd6-aeab-b78de21d43f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:06:40.301293  514451 system_pods.go:61] "etcd-embed-certs-219333" [62a8ccba-8b23-4f61-a0ca-1295a9af29c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:06:40.301301  514451 system_pods.go:61] "kindnet-px8l8" [b918c202-c46b-4271-a53d-c2f3e0597f24] Running
	I0110 10:06:40.301306  514451 system_pods.go:61] "kube-apiserver-embed-certs-219333" [5a4b1d91-90be-42e6-868c-48743554bf8d] Running
	I0110 10:06:40.301311  514451 system_pods.go:61] "kube-controller-manager-embed-certs-219333" [07fb83cf-da8a-489c-8397-b2347fd52566] Running
	I0110 10:06:40.301315  514451 system_pods.go:61] "kube-proxy-gplbn" [b42edc75-1624-420a-80b3-4472f2766114] Running
	I0110 10:06:40.301319  514451 system_pods.go:61] "kube-scheduler-embed-certs-219333" [0b9927f9-f136-41b9-9f37-e14f60b6ba8b] Running
	I0110 10:06:40.301325  514451 system_pods.go:61] "storage-provisioner" [ef23f9bc-2e08-4b78-8b1c-01cec8e469f1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:06:40.301330  514451 system_pods.go:74] duration metric: took 5.299021ms to wait for pod list to return data ...
	I0110 10:06:40.301338  514451 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:06:40.313201  514451 default_sa.go:45] found service account: "default"
	I0110 10:06:40.313227  514451 default_sa.go:55] duration metric: took 11.879618ms for default service account to be created ...
	I0110 10:06:40.313238  514451 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:06:40.325894  514451 system_pods.go:86] 8 kube-system pods found
	I0110 10:06:40.325923  514451 system_pods.go:89] "coredns-7d764666f9-ct6xj" [7202fe21-4df1-4fd6-aeab-b78de21d43f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:06:40.325932  514451 system_pods.go:89] "etcd-embed-certs-219333" [62a8ccba-8b23-4f61-a0ca-1295a9af29c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:06:40.325939  514451 system_pods.go:89] "kindnet-px8l8" [b918c202-c46b-4271-a53d-c2f3e0597f24] Running
	I0110 10:06:40.325945  514451 system_pods.go:89] "kube-apiserver-embed-certs-219333" [5a4b1d91-90be-42e6-868c-48743554bf8d] Running
	I0110 10:06:40.325950  514451 system_pods.go:89] "kube-controller-manager-embed-certs-219333" [07fb83cf-da8a-489c-8397-b2347fd52566] Running
	I0110 10:06:40.325955  514451 system_pods.go:89] "kube-proxy-gplbn" [b42edc75-1624-420a-80b3-4472f2766114] Running
	I0110 10:06:40.325959  514451 system_pods.go:89] "kube-scheduler-embed-certs-219333" [0b9927f9-f136-41b9-9f37-e14f60b6ba8b] Running
	I0110 10:06:40.325965  514451 system_pods.go:89] "storage-provisioner" [ef23f9bc-2e08-4b78-8b1c-01cec8e469f1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:06:40.325993  514451 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 10:06:40.566015  514451 system_pods.go:86] 8 kube-system pods found
	I0110 10:06:40.566047  514451 system_pods.go:89] "coredns-7d764666f9-ct6xj" [7202fe21-4df1-4fd6-aeab-b78de21d43f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:06:40.566056  514451 system_pods.go:89] "etcd-embed-certs-219333" [62a8ccba-8b23-4f61-a0ca-1295a9af29c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:06:40.566062  514451 system_pods.go:89] "kindnet-px8l8" [b918c202-c46b-4271-a53d-c2f3e0597f24] Running
	I0110 10:06:40.566068  514451 system_pods.go:89] "kube-apiserver-embed-certs-219333" [5a4b1d91-90be-42e6-868c-48743554bf8d] Running
	I0110 10:06:40.566073  514451 system_pods.go:89] "kube-controller-manager-embed-certs-219333" [07fb83cf-da8a-489c-8397-b2347fd52566] Running
	I0110 10:06:40.566078  514451 system_pods.go:89] "kube-proxy-gplbn" [b42edc75-1624-420a-80b3-4472f2766114] Running
	I0110 10:06:40.566084  514451 system_pods.go:89] "kube-scheduler-embed-certs-219333" [0b9927f9-f136-41b9-9f37-e14f60b6ba8b] Running
	I0110 10:06:40.566096  514451 system_pods.go:89] "storage-provisioner" [ef23f9bc-2e08-4b78-8b1c-01cec8e469f1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:06:40.829355  514451 system_pods.go:86] 8 kube-system pods found
	I0110 10:06:40.829438  514451 system_pods.go:89] "coredns-7d764666f9-ct6xj" [7202fe21-4df1-4fd6-aeab-b78de21d43f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:06:40.829460  514451 system_pods.go:89] "etcd-embed-certs-219333" [62a8ccba-8b23-4f61-a0ca-1295a9af29c0] Running
	I0110 10:06:40.829497  514451 system_pods.go:89] "kindnet-px8l8" [b918c202-c46b-4271-a53d-c2f3e0597f24] Running
	I0110 10:06:40.829524  514451 system_pods.go:89] "kube-apiserver-embed-certs-219333" [5a4b1d91-90be-42e6-868c-48743554bf8d] Running
	I0110 10:06:40.829548  514451 system_pods.go:89] "kube-controller-manager-embed-certs-219333" [07fb83cf-da8a-489c-8397-b2347fd52566] Running
	I0110 10:06:40.829572  514451 system_pods.go:89] "kube-proxy-gplbn" [b42edc75-1624-420a-80b3-4472f2766114] Running
	I0110 10:06:40.829606  514451 system_pods.go:89] "kube-scheduler-embed-certs-219333" [0b9927f9-f136-41b9-9f37-e14f60b6ba8b] Running
	I0110 10:06:40.829636  514451 system_pods.go:89] "storage-provisioner" [ef23f9bc-2e08-4b78-8b1c-01cec8e469f1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:06:41.182258  514451 system_pods.go:86] 8 kube-system pods found
	I0110 10:06:41.182291  514451 system_pods.go:89] "coredns-7d764666f9-ct6xj" [7202fe21-4df1-4fd6-aeab-b78de21d43f9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:06:41.182298  514451 system_pods.go:89] "etcd-embed-certs-219333" [62a8ccba-8b23-4f61-a0ca-1295a9af29c0] Running
	I0110 10:06:41.182304  514451 system_pods.go:89] "kindnet-px8l8" [b918c202-c46b-4271-a53d-c2f3e0597f24] Running
	I0110 10:06:41.182309  514451 system_pods.go:89] "kube-apiserver-embed-certs-219333" [5a4b1d91-90be-42e6-868c-48743554bf8d] Running
	I0110 10:06:41.182315  514451 system_pods.go:89] "kube-controller-manager-embed-certs-219333" [07fb83cf-da8a-489c-8397-b2347fd52566] Running
	I0110 10:06:41.182319  514451 system_pods.go:89] "kube-proxy-gplbn" [b42edc75-1624-420a-80b3-4472f2766114] Running
	I0110 10:06:41.182324  514451 system_pods.go:89] "kube-scheduler-embed-certs-219333" [0b9927f9-f136-41b9-9f37-e14f60b6ba8b] Running
	I0110 10:06:41.182330  514451 system_pods.go:89] "storage-provisioner" [ef23f9bc-2e08-4b78-8b1c-01cec8e469f1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
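The repeated pod listings above are a poll loop: list the kube-system pods, and if a required component (here kube-dns, i.e. coredns) is still Pending, retry after a short delay until a timeout. A generic sketch of that wait shape, stdlib only, with a stand-in predicate instead of a real API call:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check until it returns nil or the timeout elapses, sleeping
// interval between attempts; the same shape as the
// "will retry after 200ms: missing components: kube-dns" loop above.
func waitFor(check func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	// Stand-in predicate: pretend kube-dns shows up on the third poll.
	check := func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}
	if err := waitFor(check, 200*time.Millisecond, 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("k8s-apps running after", attempts, "polls")
}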
	I0110 10:06:40.754264  517877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.crt ...
	I0110 10:06:40.754297  517877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.crt: {Name:mk2bd001164a2598dca265278e733a8f60cbb71e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:40.754549  517877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.key ...
	I0110 10:06:40.754568  517877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.key: {Name:mkba098852e4213e3d2f9173dad534f949f5d678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:06:40.754808  517877 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:06:40.754876  517877 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:06:40.754892  517877 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:06:40.754927  517877 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:06:40.754974  517877 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:06:40.755009  517877 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:06:40.755087  517877 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:06:40.755717  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:06:40.775006  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:06:40.794211  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:06:40.814844  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:06:40.834083  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 10:06:40.851736  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:06:40.870145  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:06:40.889432  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:06:40.907073  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:06:40.925627  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:06:40.943567  517877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:06:40.961161  517877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:06:40.973827  517877 ssh_runner.go:195] Run: openssl version
	I0110 10:06:40.980635  517877 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:06:40.988040  517877 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:06:40.995584  517877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:06:40.999045  517877 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:06:40.999135  517877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:06:41.043023  517877 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:06:41.050806  517877 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
	I0110 10:06:41.058301  517877 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:06:41.066141  517877 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:06:41.073678  517877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:06:41.077455  517877 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:06:41.077526  517877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:06:41.118365  517877 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:06:41.125794  517877 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 10:06:41.133620  517877 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:06:41.141204  517877 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:06:41.148853  517877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:06:41.152834  517877 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:06:41.152944  517877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:06:41.197566  517877 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:06:41.205304  517877 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
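Each CA certificate above is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the user certs), which is how OpenSSL locates trust anchors. A small sketch of producing that hash and link the same way, by shelling out to the openssl and ln commands seen in the log; running it needs openssl installed and root for the symlink.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// Same command the provisioner runs above; prints the subject hash,
	// e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// The hash names the symlink OpenSSL looks up when verifying chains.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := exec.Command("sudo", "ln", "-fs", cert, link).Run(); err != nil {
		panic(err)
	}
	fmt.Println("linked", cert, "->", link)
}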
	I0110 10:06:41.212850  517877 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:06:41.216602  517877 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 10:06:41.216655  517877 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:06:41.216734  517877 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:06:41.216800  517877 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:06:41.244278  517877 cri.go:96] found id: ""
	I0110 10:06:41.244432  517877 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:06:41.252594  517877 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 10:06:41.260691  517877 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:06:41.260808  517877 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:06:41.268796  517877 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:06:41.268858  517877 kubeadm.go:158] found existing configuration files:
	
	I0110 10:06:41.268917  517877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0110 10:06:41.276580  517877 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:06:41.276666  517877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:06:41.283914  517877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0110 10:06:41.291744  517877 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:06:41.291864  517877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:06:41.299307  517877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0110 10:06:41.307468  517877 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:06:41.307566  517877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:06:41.315022  517877 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0110 10:06:41.323062  517877 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:06:41.323147  517877 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 10:06:41.330812  517877 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:06:41.371271  517877 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:06:41.372697  517877 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:06:41.486874  517877 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:06:41.486994  517877 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:06:41.487075  517877 kubeadm.go:319] OS: Linux
	I0110 10:06:41.487154  517877 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:06:41.487236  517877 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:06:41.487313  517877 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:06:41.487392  517877 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:06:41.487469  517877 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:06:41.487549  517877 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:06:41.487621  517877 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:06:41.487701  517877 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:06:41.487774  517877 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:06:41.577885  517877 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:06:41.578054  517877 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:06:41.578176  517877 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:06:41.589533  517877 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:06:41.793308  514451 system_pods.go:86] 8 kube-system pods found
	I0110 10:06:41.793346  514451 system_pods.go:89] "coredns-7d764666f9-ct6xj" [7202fe21-4df1-4fd6-aeab-b78de21d43f9] Running
	I0110 10:06:41.793354  514451 system_pods.go:89] "etcd-embed-certs-219333" [62a8ccba-8b23-4f61-a0ca-1295a9af29c0] Running
	I0110 10:06:41.793358  514451 system_pods.go:89] "kindnet-px8l8" [b918c202-c46b-4271-a53d-c2f3e0597f24] Running
	I0110 10:06:41.793364  514451 system_pods.go:89] "kube-apiserver-embed-certs-219333" [5a4b1d91-90be-42e6-868c-48743554bf8d] Running
	I0110 10:06:41.793370  514451 system_pods.go:89] "kube-controller-manager-embed-certs-219333" [07fb83cf-da8a-489c-8397-b2347fd52566] Running
	I0110 10:06:41.793378  514451 system_pods.go:89] "kube-proxy-gplbn" [b42edc75-1624-420a-80b3-4472f2766114] Running
	I0110 10:06:41.793386  514451 system_pods.go:89] "kube-scheduler-embed-certs-219333" [0b9927f9-f136-41b9-9f37-e14f60b6ba8b] Running
	I0110 10:06:41.793390  514451 system_pods.go:89] "storage-provisioner" [ef23f9bc-2e08-4b78-8b1c-01cec8e469f1] Running
	I0110 10:06:41.793404  514451 system_pods.go:126] duration metric: took 1.480159658s to wait for k8s-apps to be running ...
	I0110 10:06:41.793412  514451 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:06:41.793475  514451 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:06:41.809616  514451 system_svc.go:56] duration metric: took 16.192124ms WaitForService to wait for kubelet
	I0110 10:06:41.809649  514451 kubeadm.go:587] duration metric: took 16.246940854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:06:41.809671  514451 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:06:41.812870  514451 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:06:41.812906  514451 node_conditions.go:123] node cpu capacity is 2
	I0110 10:06:41.812926  514451 node_conditions.go:105] duration metric: took 3.243636ms to run NodePressure ...
	I0110 10:06:41.812940  514451 start.go:242] waiting for startup goroutines ...
	I0110 10:06:41.812953  514451 start.go:247] waiting for cluster config update ...
	I0110 10:06:41.812965  514451 start.go:256] writing updated cluster config ...
	I0110 10:06:41.813255  514451 ssh_runner.go:195] Run: rm -f paused
	I0110 10:06:41.817803  514451 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:06:41.822023  514451 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ct6xj" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:41.828281  514451 pod_ready.go:94] pod "coredns-7d764666f9-ct6xj" is "Ready"
	I0110 10:06:41.828309  514451 pod_ready.go:86] duration metric: took 6.25704ms for pod "coredns-7d764666f9-ct6xj" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:41.831129  514451 pod_ready.go:83] waiting for pod "etcd-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:41.837121  514451 pod_ready.go:94] pod "etcd-embed-certs-219333" is "Ready"
	I0110 10:06:41.837156  514451 pod_ready.go:86] duration metric: took 5.999749ms for pod "etcd-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:41.840112  514451 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:41.846030  514451 pod_ready.go:94] pod "kube-apiserver-embed-certs-219333" is "Ready"
	I0110 10:06:41.846067  514451 pod_ready.go:86] duration metric: took 5.928782ms for pod "kube-apiserver-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:41.849070  514451 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:42.223581  514451 pod_ready.go:94] pod "kube-controller-manager-embed-certs-219333" is "Ready"
	I0110 10:06:42.223661  514451 pod_ready.go:86] duration metric: took 374.566148ms for pod "kube-controller-manager-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:42.423209  514451 pod_ready.go:83] waiting for pod "kube-proxy-gplbn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:42.822939  514451 pod_ready.go:94] pod "kube-proxy-gplbn" is "Ready"
	I0110 10:06:42.823028  514451 pod_ready.go:86] duration metric: took 399.719986ms for pod "kube-proxy-gplbn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:43.023383  514451 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:43.424053  514451 pod_ready.go:94] pod "kube-scheduler-embed-certs-219333" is "Ready"
	I0110 10:06:43.424080  514451 pod_ready.go:86] duration metric: took 400.616294ms for pod "kube-scheduler-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:06:43.424093  514451 pod_ready.go:40] duration metric: took 1.60624452s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:06:43.484570  514451 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:06:43.488001  514451 out.go:203] 
	W0110 10:06:43.490814  514451 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:06:43.493872  514451 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:06:43.497621  514451 out.go:179] * Done! kubectl is now configured to use "embed-certs-219333" cluster and "default" namespace by default
	I0110 10:06:41.594643  517877 out.go:252]   - Generating certificates and keys ...
	I0110 10:06:41.594796  517877 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:06:41.594902  517877 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:06:41.874898  517877 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:06:42.397478  517877 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:06:42.738674  517877 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:06:42.788626  517877 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:06:42.977459  517877 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:06:42.977840  517877 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-820203 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:06:43.260865  517877 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:06:43.261225  517877 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-820203 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:06:43.323239  517877 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:06:44.027513  517877 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:06:44.197928  517877 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:06:44.198220  517877 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:06:44.306296  517877 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:06:44.877184  517877 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:06:44.926393  517877 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:06:45.611016  517877 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:06:45.773954  517877 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:06:45.775011  517877 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:06:45.781542  517877 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:06:45.784956  517877 out.go:252]   - Booting up control plane ...
	I0110 10:06:45.785080  517877 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:06:45.785177  517877 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:06:45.785257  517877 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:06:45.803907  517877 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:06:45.804313  517877 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:06:45.812121  517877 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:06:45.812579  517877 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:06:45.812628  517877 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:06:45.940968  517877 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:06:45.941089  517877 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:06:46.942093  517877 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001404127s
	I0110 10:06:46.947170  517877 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 10:06:46.947675  517877 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I0110 10:06:46.948477  517877 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 10:06:46.949185  517877 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 10:06:48.464299  517877 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.515084948s
	I0110 10:06:50.248268  517877 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.298422319s
	I0110 10:06:51.950314  517877 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001825387s
	I0110 10:06:52.014790  517877 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 10:06:52.037975  517877 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 10:06:52.053910  517877 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 10:06:52.054173  517877 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-820203 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 10:06:52.068858  517877 kubeadm.go:319] [bootstrap-token] Using token: kqwrnm.bwxdi6ohpdd1uzxe
	I0110 10:06:52.071896  517877 out.go:252]   - Configuring RBAC rules ...
	I0110 10:06:52.072041  517877 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 10:06:52.078343  517877 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 10:06:52.088652  517877 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 10:06:52.093442  517877 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 10:06:52.098180  517877 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 10:06:52.102564  517877 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 10:06:52.357484  517877 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 10:06:52.868845  517877 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 10:06:53.357944  517877 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 10:06:53.362813  517877 kubeadm.go:319] 
	I0110 10:06:53.362890  517877 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 10:06:53.362895  517877 kubeadm.go:319] 
	I0110 10:06:53.362972  517877 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 10:06:53.362976  517877 kubeadm.go:319] 
	I0110 10:06:53.363002  517877 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 10:06:53.363075  517877 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 10:06:53.363127  517877 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 10:06:53.363131  517877 kubeadm.go:319] 
	I0110 10:06:53.363185  517877 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 10:06:53.363189  517877 kubeadm.go:319] 
	I0110 10:06:53.363249  517877 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 10:06:53.363254  517877 kubeadm.go:319] 
	I0110 10:06:53.363306  517877 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 10:06:53.363383  517877 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 10:06:53.363451  517877 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 10:06:53.363455  517877 kubeadm.go:319] 
	I0110 10:06:53.363540  517877 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 10:06:53.363617  517877 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 10:06:53.363621  517877 kubeadm.go:319] 
	I0110 10:06:53.363704  517877 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token kqwrnm.bwxdi6ohpdd1uzxe \
	I0110 10:06:53.363808  517877 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e \
	I0110 10:06:53.363829  517877 kubeadm.go:319] 	--control-plane 
	I0110 10:06:53.363833  517877 kubeadm.go:319] 
	I0110 10:06:53.363918  517877 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 10:06:53.363922  517877 kubeadm.go:319] 
	I0110 10:06:53.364004  517877 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token kqwrnm.bwxdi6ohpdd1uzxe \
	I0110 10:06:53.364107  517877 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6cb971c204f1ad6be09e0d96e38ee50ab1cfd8bae74652632717e44753ffdf4e 
	I0110 10:06:53.365506  517877 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 10:06:53.365952  517877 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 10:06:53.366067  517877 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 10:06:53.366135  517877 cni.go:84] Creating CNI manager for ""
	I0110 10:06:53.366147  517877 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:06:53.370222  517877 out.go:179] * Configuring CNI (Container Networking Interface) ...
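
The kubeadm output above covers the full bring-up for default-k8s-diff-port-820203: certificate generation, kubeconfig writing, control-plane boot, RBAC rules, and the bootstrap/join tokens. As a minimal sketch (not part of the test harness), the resulting control plane can be spot-checked from the build host with the same binary and profile name that appear in the log, assuming the start completed:

  out/minikube-linux-arm64 -p default-k8s-diff-port-820203 kubectl -- get nodes -o wide
  out/minikube-linux-arm64 -p default-k8s-diff-port-820203 kubectl -- -n kube-system get pods

Routing through minikube's bundled kubectl also sidesteps the version-skew warning reported for /usr/local/bin/kubectl (1.33.2 against cluster 1.35.0).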
	
	
	==> CRI-O <==
	Jan 10 10:06:40 embed-certs-219333 crio[839]: time="2026-01-10T10:06:40.560619885Z" level=info msg="Created container 49780b623eb062929ae8e89421f769e0fb24cda8d2518b399063a6f9107ca9e6: kube-system/coredns-7d764666f9-ct6xj/coredns" id=88b0168d-5eef-4642-ae19-a32c6325db02 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:06:40 embed-certs-219333 crio[839]: time="2026-01-10T10:06:40.565364922Z" level=info msg="Starting container: 49780b623eb062929ae8e89421f769e0fb24cda8d2518b399063a6f9107ca9e6" id=cd9b146f-2a9c-4b2c-a7e9-8ceb8913c3c3 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:06:40 embed-certs-219333 crio[839]: time="2026-01-10T10:06:40.576904835Z" level=info msg="Started container" PID=1782 containerID=49780b623eb062929ae8e89421f769e0fb24cda8d2518b399063a6f9107ca9e6 description=kube-system/coredns-7d764666f9-ct6xj/coredns id=cd9b146f-2a9c-4b2c-a7e9-8ceb8913c3c3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=95eae8e9f93d9897b9e74821c87c30bfef460fe4e835cc227adcf4320665f141
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.103095914Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5cad6e07-3263-476a-896f-c1f0870f790e name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.103204288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.112217277Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8f6b3f396189594330675d4209f17b1866dac3a824f6dbe60e042dd1d8893abb UID:a3f12a22-072b-44a0-84f9-98b212456e49 NetNS:/var/run/netns/109872d9-3f9d-484b-b52a-3e6caa732a08 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000065be0}] Aliases:map[]}"
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.113039417Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.133362974Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8f6b3f396189594330675d4209f17b1866dac3a824f6dbe60e042dd1d8893abb UID:a3f12a22-072b-44a0-84f9-98b212456e49 NetNS:/var/run/netns/109872d9-3f9d-484b-b52a-3e6caa732a08 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000065be0}] Aliases:map[]}"
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.133666773Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.136393298Z" level=info msg="Ran pod sandbox 8f6b3f396189594330675d4209f17b1866dac3a824f6dbe60e042dd1d8893abb with infra container: default/busybox/POD" id=5cad6e07-3263-476a-896f-c1f0870f790e name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.14027868Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e04eba45-07bd-419c-96cc-746fd9fb200c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.140612272Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e04eba45-07bd-419c-96cc-746fd9fb200c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.140812455Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e04eba45-07bd-419c-96cc-746fd9fb200c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.144136412Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=95f4682d-3840-4139-bd5a-cd0ea4b191da name=/runtime.v1.ImageService/PullImage
	Jan 10 10:06:44 embed-certs-219333 crio[839]: time="2026-01-10T10:06:44.144638662Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 10:06:46 embed-certs-219333 crio[839]: time="2026-01-10T10:06:46.344268254Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=95f4682d-3840-4139-bd5a-cd0ea4b191da name=/runtime.v1.ImageService/PullImage
	Jan 10 10:06:46 embed-certs-219333 crio[839]: time="2026-01-10T10:06:46.345117028Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6a818351-5576-4299-af57-9006b1ccb6b9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:06:46 embed-certs-219333 crio[839]: time="2026-01-10T10:06:46.350903982Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=049bda19-d3c0-4ce5-bfd6-a4035dde2d51 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:06:46 embed-certs-219333 crio[839]: time="2026-01-10T10:06:46.356306249Z" level=info msg="Creating container: default/busybox/busybox" id=5cf8e007-9377-4df1-8fc7-192d8d72dc96 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:06:46 embed-certs-219333 crio[839]: time="2026-01-10T10:06:46.356427794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:06:46 embed-certs-219333 crio[839]: time="2026-01-10T10:06:46.361319089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:06:46 embed-certs-219333 crio[839]: time="2026-01-10T10:06:46.361912901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:06:46 embed-certs-219333 crio[839]: time="2026-01-10T10:06:46.389795357Z" level=info msg="Created container 9ea62b42c4262e8fb2fa9e47dc1af9dbec252bcc1f519288a2622a3d2c9860c4: default/busybox/busybox" id=5cf8e007-9377-4df1-8fc7-192d8d72dc96 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:06:46 embed-certs-219333 crio[839]: time="2026-01-10T10:06:46.39149082Z" level=info msg="Starting container: 9ea62b42c4262e8fb2fa9e47dc1af9dbec252bcc1f519288a2622a3d2c9860c4" id=682ba087-6628-4665-8170-567061770a94 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:06:46 embed-certs-219333 crio[839]: time="2026-01-10T10:06:46.396149046Z" level=info msg="Started container" PID=1839 containerID=9ea62b42c4262e8fb2fa9e47dc1af9dbec252bcc1f519288a2622a3d2c9860c4 description=default/busybox/busybox id=682ba087-6628-4665-8170-567061770a94 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f6b3f396189594330675d4209f17b1866dac3a824f6dbe60e042dd1d8893abb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	9ea62b42c4262       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   8f6b3f3961895       busybox                                      default
	49780b623eb06       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      14 seconds ago      Running             coredns                   0                   95eae8e9f93d9       coredns-7d764666f9-ct6xj                     kube-system
	d64f2f2823e4e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   3cc372d65fe45       storage-provisioner                          kube-system
	c036dea388f31       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    25 seconds ago      Running             kindnet-cni               0                   1a024f269f8a6       kindnet-px8l8                                kube-system
	0ea9e75fb031b       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      28 seconds ago      Running             kube-proxy                0                   07e487e628e1a       kube-proxy-gplbn                             kube-system
	da505ade3e87f       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      39 seconds ago      Running             kube-controller-manager   0                   fabcc61d0a95c       kube-controller-manager-embed-certs-219333   kube-system
	1560032f0fb28       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      39 seconds ago      Running             kube-apiserver            0                   d54760b111f17       kube-apiserver-embed-certs-219333            kube-system
	9b4f2b85425d2       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      39 seconds ago      Running             etcd                      0                   79ac5d405b110       etcd-embed-certs-219333                      kube-system
	1aa0518bbc158       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      39 seconds ago      Running             kube-scheduler            0                   c00984ee01735       kube-scheduler-embed-certs-219333            kube-system
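
The container status table is a point-in-time CRI listing taken on the embed-certs-219333 node. A sketch of how to reproduce it by hand, assuming the node container is still up (crictl needs root inside the node, so the commands go through minikube ssh):

  out/minikube-linux-arm64 -p embed-certs-219333 ssh -- sudo crictl ps -a
  out/minikube-linux-arm64 -p embed-certs-219333 ssh -- sudo crictl images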
	
	
	==> coredns [49780b623eb062929ae8e89421f769e0fb24cda8d2518b399063a6f9107ca9e6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43474 - 46372 "HINFO IN 3574437164249732773.6327032523728676760. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021496167s
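
CoreDNS came up cleanly; the single NXDOMAIN entry is its usual startup HINFO self-probe. A quick, hedged check that in-cluster DNS resolves, reusing the busybox pod the test created in the default namespace (busybox 1.28 ships nslookup):

  out/minikube-linux-arm64 -p embed-certs-219333 kubectl -- exec busybox -- nslookup kubernetes.default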
	
	
	==> describe nodes <==
	Name:               embed-certs-219333
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-219333
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=embed-certs-219333
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_06_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:06:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-219333
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:06:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:06:50 +0000   Sat, 10 Jan 2026 10:06:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:06:50 +0000   Sat, 10 Jan 2026 10:06:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:06:50 +0000   Sat, 10 Jan 2026 10:06:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:06:50 +0000   Sat, 10 Jan 2026 10:06:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-219333
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                d1d7a876-2a30-486f-839c-2eda89461ed8
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-ct6xj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-embed-certs-219333                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-px8l8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-embed-certs-219333             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-embed-certs-219333    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-gplbn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-embed-certs-219333             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node embed-certs-219333 event: Registered Node embed-certs-219333 in Controller
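
The node description above is a kubectl describe dump; if the cluster is still running, the Ready condition and resource accounting can be re-queried directly, for example:

  out/minikube-linux-arm64 -p embed-certs-219333 kubectl -- describe node embed-certs-219333
  out/minikube-linux-arm64 -p embed-certs-219333 kubectl -- get node embed-certs-219333 -o jsonpath='{.status.conditions}'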
	
	
	==> dmesg <==
	[ +27.835142] overlayfs: idmapped layers are currently not supported
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	[Jan10 10:06] overlayfs: idmapped layers are currently not supported
	[ +32.420107] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9b4f2b85425d2c2cda62a664c22b35f169f4ffee116f3a0dc61a300b1db8a497] <==
	{"level":"info","ts":"2026-01-10T10:06:15.225241Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:06:15.772551Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T10:06:15.772682Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T10:06:15.772760Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T10:06:15.772815Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:06:15.772899Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:06:15.776551Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:06:15.776640Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:06:15.776685Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T10:06:15.776735Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:06:15.780787Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-219333 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:06:15.781012Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:06:15.781242Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:06:15.792516Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:06:15.793395Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:06:15.804582Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:06:15.805374Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:06:15.805484Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T10:06:15.808574Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T10:06:15.808829Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:06:15.794043Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:06:15.794086Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:06:15.820657Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T10:06:15.810745Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T10:06:15.837262Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
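
The etcd log shows a clean single-member bootstrap (pre-vote, leader elected at term 2, storage schema migrated to 3.6). A hedged health probe run from inside the etcd static pod is sketched below; the certificate paths assume minikube's standard certificate directory (/var/lib/minikube/certs, as reported in the kubeadm output earlier) and may differ on other setups:

  out/minikube-linux-arm64 -p embed-certs-219333 kubectl -- -n kube-system exec etcd-embed-certs-219333 -- \
    etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health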
	
	
	==> kernel <==
	 10:06:54 up  2:49,  0 user,  load average: 2.29, 1.70, 1.91
	Linux embed-certs-219333 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c036dea388f312908f741601f9cd0aab0cbd9b4b686436a9fbf2af38df1fe211] <==
	I0110 10:06:29.429358       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:06:29.433221       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:06:29.433428       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:06:29.433470       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:06:29.516621       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:06:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:06:29.718364       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:06:29.718473       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:06:29.718537       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:06:29.719657       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 10:06:29.918754       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:06:29.918846       1 metrics.go:72] Registering metrics
	I0110 10:06:29.918928       1 controller.go:711] "Syncing nftables rules"
	I0110 10:06:39.719158       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:06:39.719316       1 main.go:301] handling current node
	I0110 10:06:49.718536       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:06:49.718587       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1560032f0fb28aa32e571b6c0a244f6ca3b12ac0212c8507d669f01edab0f811] <==
	E0110 10:06:17.750924       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0110 10:06:17.784927       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0110 10:06:17.830932       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:06:17.836158       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:06:17.836230       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 10:06:17.847422       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:06:17.847515       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 10:06:17.955430       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:06:18.541909       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 10:06:18.549998       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 10:06:18.550084       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:06:19.293983       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:06:19.347449       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:06:19.460802       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 10:06:19.478422       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 10:06:19.479686       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 10:06:19.485784       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:06:19.686677       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:06:20.274626       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:06:20.297721       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 10:06:20.313018       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 10:06:25.378804       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:06:25.442641       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:06:25.661020       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 10:06:25.711626       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [da505ade3e87ff62fad85cf305650c134a24089e2497e104a9ba61006f917f35] <==
	I0110 10:06:24.521407       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.521748       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.521759       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.521770       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.521778       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.521783       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522124       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.532703       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:06:24.532735       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 10:06:24.522130       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522138       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522144       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522150       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522201       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522207       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.538956       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522156       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522162       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522167       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522183       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522188       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.522195       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:24.554135       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-219333" podCIDRs=["10.244.0.0/24"]
	I0110 10:06:24.590338       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:44.499868       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0ea9e75fb031b00a82b4f650426605aa3a796058beaf020db0effd13420cd7cc] <==
	I0110 10:06:26.706222       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:06:26.862542       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:06:26.967639       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:26.967669       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 10:06:26.967753       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:06:27.034182       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:06:27.034233       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:06:27.043445       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:06:27.044891       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:06:27.044917       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:06:27.046301       1 config.go:200] "Starting service config controller"
	I0110 10:06:27.046323       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:06:27.046340       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:06:27.046344       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:06:27.046354       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:06:27.046358       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:06:27.057200       1 config.go:309] "Starting node config controller"
	I0110 10:06:27.057219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:06:27.057226       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:06:27.146684       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 10:06:27.146718       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:06:27.146750       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
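
The component logs above (kindnet, kube-apiserver, kube-controller-manager, kube-proxy) show an uneventful first start, and the "Failed to watch ... forbidden" scheduler errors below are the transient listing failures commonly seen while bootstrap RBAC is still being applied. If a fuller dump is needed later, a sketch of re-collecting it with standard minikube/kubectl usage (the output filename is arbitrary):

  out/minikube-linux-arm64 -p embed-certs-219333 logs --file=embed-certs-219333.log
  out/minikube-linux-arm64 -p embed-certs-219333 kubectl -- -n kube-system logs kube-proxy-gplbn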
	
	
	==> kube-scheduler [1aa0518bbc158608a0efad1f412336eb3fe9c9f4f9836f21e5b6901207c29499] <==
	E0110 10:06:17.744668       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 10:06:17.744774       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 10:06:17.745017       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 10:06:17.745112       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 10:06:17.745191       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 10:06:17.745276       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 10:06:17.745432       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 10:06:17.745500       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 10:06:17.745533       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 10:06:18.561835       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 10:06:18.576589       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 10:06:18.626039       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 10:06:18.628668       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 10:06:18.676848       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 10:06:18.680064       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 10:06:18.740881       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 10:06:18.747074       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 10:06:18.781992       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 10:06:18.796488       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 10:06:18.821668       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0110 10:06:18.830941       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 10:06:19.004713       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 10:06:19.037680       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 10:06:19.064726       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	I0110 10:06:21.805515       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:06:26 embed-certs-219333 kubelet[1305]: I0110 10:06:26.005971    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b918c202-c46b-4271-a53d-c2f3e0597f24-lib-modules\") pod \"kindnet-px8l8\" (UID: \"b918c202-c46b-4271-a53d-c2f3e0597f24\") " pod="kube-system/kindnet-px8l8"
	Jan 10 10:06:26 embed-certs-219333 kubelet[1305]: I0110 10:06:26.005991    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b42edc75-1624-420a-80b3-4472f2766114-lib-modules\") pod \"kube-proxy-gplbn\" (UID: \"b42edc75-1624-420a-80b3-4472f2766114\") " pod="kube-system/kube-proxy-gplbn"
	Jan 10 10:06:26 embed-certs-219333 kubelet[1305]: I0110 10:06:26.006009    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnmwz\" (UniqueName: \"kubernetes.io/projected/b918c202-c46b-4271-a53d-c2f3e0597f24-kube-api-access-gnmwz\") pod \"kindnet-px8l8\" (UID: \"b918c202-c46b-4271-a53d-c2f3e0597f24\") " pod="kube-system/kindnet-px8l8"
	Jan 10 10:06:26 embed-certs-219333 kubelet[1305]: I0110 10:06:26.006028    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b42edc75-1624-420a-80b3-4472f2766114-xtables-lock\") pod \"kube-proxy-gplbn\" (UID: \"b42edc75-1624-420a-80b3-4472f2766114\") " pod="kube-system/kube-proxy-gplbn"
	Jan 10 10:06:26 embed-certs-219333 kubelet[1305]: I0110 10:06:26.230248    1305 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 10:06:26 embed-certs-219333 kubelet[1305]: W0110 10:06:26.310767    1305 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/crio-1a024f269f8a6adbe13c266163382c0e63a9c9d42660ecd80a3892a47414958e WatchSource:0}: Error finding container 1a024f269f8a6adbe13c266163382c0e63a9c9d42660ecd80a3892a47414958e: Status 404 returned error can't find the container with id 1a024f269f8a6adbe13c266163382c0e63a9c9d42660ecd80a3892a47414958e
	Jan 10 10:06:26 embed-certs-219333 kubelet[1305]: W0110 10:06:26.542303    1305 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/crio-07e487e628e1a782bda481f94747a71022d770df761d7955623caa9ac99dc274 WatchSource:0}: Error finding container 07e487e628e1a782bda481f94747a71022d770df761d7955623caa9ac99dc274: Status 404 returned error can't find the container with id 07e487e628e1a782bda481f94747a71022d770df761d7955623caa9ac99dc274
	Jan 10 10:06:29 embed-certs-219333 kubelet[1305]: I0110 10:06:29.426925    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-gplbn" podStartSLOduration=4.426908272 podStartE2EDuration="4.426908272s" podCreationTimestamp="2026-01-10 10:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:06:27.502135484 +0000 UTC m=+7.396041731" watchObservedRunningTime="2026-01-10 10:06:29.426908272 +0000 UTC m=+9.320814527"
	Jan 10 10:06:29 embed-certs-219333 kubelet[1305]: I0110 10:06:29.427590    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-px8l8" podStartSLOduration=1.400571984 podStartE2EDuration="4.427578812s" podCreationTimestamp="2026-01-10 10:06:25 +0000 UTC" firstStartedPulling="2026-01-10 10:06:26.3317241 +0000 UTC m=+6.225630347" lastFinishedPulling="2026-01-10 10:06:29.358730928 +0000 UTC m=+9.252637175" observedRunningTime="2026-01-10 10:06:29.426798011 +0000 UTC m=+9.320704266" watchObservedRunningTime="2026-01-10 10:06:29.427578812 +0000 UTC m=+9.321485092"
	Jan 10 10:06:30 embed-certs-219333 kubelet[1305]: E0110 10:06:30.689511    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-219333" containerName="etcd"
	Jan 10 10:06:35 embed-certs-219333 kubelet[1305]: E0110 10:06:35.793697    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-219333" containerName="kube-controller-manager"
	Jan 10 10:06:35 embed-certs-219333 kubelet[1305]: E0110 10:06:35.984792    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-219333" containerName="kube-apiserver"
	Jan 10 10:06:36 embed-certs-219333 kubelet[1305]: E0110 10:06:36.036340    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-219333" containerName="kube-scheduler"
	Jan 10 10:06:39 embed-certs-219333 kubelet[1305]: I0110 10:06:39.933908    1305 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 10:06:40 embed-certs-219333 kubelet[1305]: I0110 10:06:40.033065    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7202fe21-4df1-4fd6-aeab-b78de21d43f9-config-volume\") pod \"coredns-7d764666f9-ct6xj\" (UID: \"7202fe21-4df1-4fd6-aeab-b78de21d43f9\") " pod="kube-system/coredns-7d764666f9-ct6xj"
	Jan 10 10:06:40 embed-certs-219333 kubelet[1305]: I0110 10:06:40.033134    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ef23f9bc-2e08-4b78-8b1c-01cec8e469f1-tmp\") pod \"storage-provisioner\" (UID: \"ef23f9bc-2e08-4b78-8b1c-01cec8e469f1\") " pod="kube-system/storage-provisioner"
	Jan 10 10:06:40 embed-certs-219333 kubelet[1305]: I0110 10:06:40.033180    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg5f5\" (UniqueName: \"kubernetes.io/projected/7202fe21-4df1-4fd6-aeab-b78de21d43f9-kube-api-access-xg5f5\") pod \"coredns-7d764666f9-ct6xj\" (UID: \"7202fe21-4df1-4fd6-aeab-b78de21d43f9\") " pod="kube-system/coredns-7d764666f9-ct6xj"
	Jan 10 10:06:40 embed-certs-219333 kubelet[1305]: I0110 10:06:40.033209    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jckk9\" (UniqueName: \"kubernetes.io/projected/ef23f9bc-2e08-4b78-8b1c-01cec8e469f1-kube-api-access-jckk9\") pod \"storage-provisioner\" (UID: \"ef23f9bc-2e08-4b78-8b1c-01cec8e469f1\") " pod="kube-system/storage-provisioner"
	Jan 10 10:06:40 embed-certs-219333 kubelet[1305]: E0110 10:06:40.690548    1305 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-219333" containerName="etcd"
	Jan 10 10:06:41 embed-certs-219333 kubelet[1305]: E0110 10:06:41.470009    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ct6xj" containerName="coredns"
	Jan 10 10:06:41 embed-certs-219333 kubelet[1305]: I0110 10:06:41.507082    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.507064807999999 podStartE2EDuration="14.507064808s" podCreationTimestamp="2026-01-10 10:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:06:41.489983064 +0000 UTC m=+21.383889311" watchObservedRunningTime="2026-01-10 10:06:41.507064808 +0000 UTC m=+21.400971071"
	Jan 10 10:06:42 embed-certs-219333 kubelet[1305]: E0110 10:06:42.473213    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ct6xj" containerName="coredns"
	Jan 10 10:06:43 embed-certs-219333 kubelet[1305]: E0110 10:06:43.475742    1305 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ct6xj" containerName="coredns"
	Jan 10 10:06:43 embed-certs-219333 kubelet[1305]: I0110 10:06:43.793477    1305 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ct6xj" podStartSLOduration=17.793459659 podStartE2EDuration="17.793459659s" podCreationTimestamp="2026-01-10 10:06:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:06:41.508038351 +0000 UTC m=+21.401944606" watchObservedRunningTime="2026-01-10 10:06:43.793459659 +0000 UTC m=+23.687365906"
	Jan 10 10:06:43 embed-certs-219333 kubelet[1305]: I0110 10:06:43.874229    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hpp8\" (UniqueName: \"kubernetes.io/projected/a3f12a22-072b-44a0-84f9-98b212456e49-kube-api-access-7hpp8\") pod \"busybox\" (UID: \"a3f12a22-072b-44a0-84f9-98b212456e49\") " pod="default/busybox"
	
	
	==> storage-provisioner [d64f2f2823e4ed2bfdd2157adb049076d9fb64dca2493da0eb0cbb99f57bf502] <==
	I0110 10:06:40.479030       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:06:40.502464       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:06:40.503028       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 10:06:40.509138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:40.520078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:06:40.520481       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:06:40.522925       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-219333_42efc7c2-a114-4099-af11-c9032b22aaca!
	I0110 10:06:40.528395       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17aa7d93-7fb8-45e3-85a5-4943a2914558", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-219333_42efc7c2-a114-4099-af11-c9032b22aaca became leader
	W0110 10:06:40.534422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:40.538950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:06:40.625052       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-219333_42efc7c2-a114-4099-af11-c9032b22aaca!
	W0110 10:06:42.542763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:42.548461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:44.551385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:44.557071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:46.560805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:46.568874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:48.572453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:48.577479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:50.580620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:50.587429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:52.590023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:52.596465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:54.601385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:06:54.608908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-219333 -n embed-certs-219333
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-219333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.88s)
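Note on the storage-provisioner log above: the recurring "v1 Endpoints is deprecated in v1.33+" warnings are the API server's deprecation notice for the Endpoints-based leader-election lock the provisioner uses (the kube-system/k8s.io-minikube-hostpath object it acquires at 10:06:40), renewed every couple of seconds. A minimal sketch for inspecting that election record, assuming the embed-certs-219333 context is reachable:

	kubectl --context embed-certs-219333 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# the current holder is typically recorded in the control-plane.alpha.kubernetes.io/leader annotation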

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-820203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-820203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (288.909479ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:07:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
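The MK_ADDON_ENABLE_PAUSED error above is not the addon itself failing: per the message, "addons enable" first checks whether the cluster is paused by listing runc containers on the node, and that check errors out because /run/runc does not exist on this crio node. A minimal repro sketch of the same check, assuming the default-k8s-diff-port-820203 profile is still running:

	out/minikube-linux-arm64 -p default-k8s-diff-port-820203 ssh -- "sudo runc list -f json"
	# expected here: "open /run/runc: no such file or directory" with exit status 1,
	# i.e. the same error that surfaces as MK_ADDON_ENABLE_PAUSED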
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-820203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-820203 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-820203 describe deploy/metrics-server -n kube-system: exit status 1 (124.475413ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-820203 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
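For reference, one way to check the expected image directly would be to read it from the deployment spec; a hypothetical sketch (the deployment was never created here, so this would also fail):

	kubectl --context default-k8s-diff-port-820203 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# the test expects this value to contain fake.domain/registry.k8s.io/echoserver:1.4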
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-820203
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-820203:

-- stdout --
	[
	    {
	        "Id": "72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08",
	        "Created": "2026-01-10T10:06:35.311708414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 518264,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:06:35.374082681Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/hostname",
	        "HostsPath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/hosts",
	        "LogPath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08-json.log",
	        "Name": "/default-k8s-diff-port-820203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-820203:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-820203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08",
	                "LowerDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-820203",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-820203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-820203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-820203",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-820203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "82b4499e6ed3b52d9beb5433a29063c32a53ac8b783bcc04b7d4781568d0354e",
	            "SandboxKey": "/var/run/docker/netns/82b4499e6ed3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-820203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:46:1d:36:63:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6955d7ca364871106ab81e8846bbb3fa5f63fcfbf0bbc67db73305008bd736d",
	                    "EndpointID": "5e6697985b8a4ef0a95c67c3d199ac6d0c52384fd2313eda32abf4009ff21429",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-820203",
	                        "72463dca0fe3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
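The Ports map in the inspect output above records the host-side bindings for this container (for example 8444/tcp is published on 127.0.0.1:33447); a sketch for reading one of them back with an inspect format template:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-820203
	# prints 33447 for the container shown above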
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-820203 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-820203 logs -n 25: (1.407847476s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-729486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:02 UTC │
	│ start   │ -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:02 UTC │ 10 Jan 26 10:03 UTC │
	│ image   │ old-k8s-version-729486 image list --format=json                                                                                                                                                                                               │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ pause   │ -p old-k8s-version-729486 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │                     │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                                                                                     │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │                     │
	│ stop    │ -p no-preload-964204 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable dashboard -p no-preload-964204 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:05 UTC │
	│ image   │ no-preload-964204 image list --format=json                                                                                                                                                                                                    │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ pause   │ -p no-preload-964204 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │                     │
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:06 UTC │
	│ ssh     │ force-systemd-flag-524845 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p force-systemd-flag-524845                                                                                                                                                                                                                  │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p disable-driver-mounts-757819                                                                                                                                                                                                               │ disable-driver-mounts-757819 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-219333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │                     │
	│ stop    │ -p embed-certs-219333 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-219333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-820203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:07:08
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:07:08.267790  521204 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:07:08.267985  521204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:07:08.268011  521204 out.go:374] Setting ErrFile to fd 2...
	I0110 10:07:08.268031  521204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:07:08.268723  521204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:07:08.269148  521204 out.go:368] Setting JSON to false
	I0110 10:07:08.270051  521204 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10178,"bootTime":1768029451,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:07:08.270120  521204 start.go:143] virtualization:  
	I0110 10:07:08.273142  521204 out.go:179] * [embed-certs-219333] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:07:08.276943  521204 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:07:08.277094  521204 notify.go:221] Checking for updates...
	I0110 10:07:08.283116  521204 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:07:08.286034  521204 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:07:08.288917  521204 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:07:08.291834  521204 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:07:08.294676  521204 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:07:08.298036  521204 config.go:182] Loaded profile config "embed-certs-219333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:07:08.298643  521204 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:07:08.326753  521204 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:07:08.326882  521204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:07:08.385185  521204 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:07:08.375465483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:07:08.385291  521204 docker.go:319] overlay module found
	I0110 10:07:08.388348  521204 out.go:179] * Using the docker driver based on existing profile
	I0110 10:07:08.391136  521204 start.go:309] selected driver: docker
	I0110 10:07:08.391159  521204 start.go:928] validating driver "docker" against &{Name:embed-certs-219333 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-219333 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:07:08.391267  521204 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:07:08.392010  521204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:07:08.447747  521204 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:07:08.438548345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:07:08.448082  521204 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:07:08.448121  521204 cni.go:84] Creating CNI manager for ""
	I0110 10:07:08.448180  521204 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:07:08.448221  521204 start.go:353] cluster config:
	{Name:embed-certs-219333 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-219333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:07:08.451365  521204 out.go:179] * Starting "embed-certs-219333" primary control-plane node in "embed-certs-219333" cluster
	I0110 10:07:08.454244  521204 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:07:08.457136  521204 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:07:08.459977  521204 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:07:08.460020  521204 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:07:08.460034  521204 cache.go:65] Caching tarball of preloaded images
	I0110 10:07:08.460044  521204 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:07:08.460133  521204 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:07:08.460144  521204 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:07:08.460252  521204 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/config.json ...
	I0110 10:07:08.480998  521204 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:07:08.481023  521204 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:07:08.481039  521204 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:07:08.481072  521204 start.go:360] acquireMachinesLock for embed-certs-219333: {Name:mk194110ed8c34314eec25e22167b583e391cf6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:07:08.481132  521204 start.go:364] duration metric: took 37.531µs to acquireMachinesLock for "embed-certs-219333"
	I0110 10:07:08.481156  521204 start.go:96] Skipping create...Using existing machine configuration
	I0110 10:07:08.481162  521204 fix.go:54] fixHost starting: 
	I0110 10:07:08.481431  521204 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:07:08.500769  521204 fix.go:112] recreateIfNeeded on embed-certs-219333: state=Stopped err=<nil>
	W0110 10:07:08.500801  521204 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 10:07:06.071298  517877 node_ready.go:57] node "default-k8s-diff-port-820203" has "Ready":"False" status (will retry)
	W0110 10:07:08.071555  517877 node_ready.go:57] node "default-k8s-diff-port-820203" has "Ready":"False" status (will retry)
	I0110 10:07:08.504013  521204 out.go:252] * Restarting existing docker container for "embed-certs-219333" ...
	I0110 10:07:08.504116  521204 cli_runner.go:164] Run: docker start embed-certs-219333
	I0110 10:07:08.782516  521204 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:07:08.801253  521204 kic.go:430] container "embed-certs-219333" state is running.
	I0110 10:07:08.801634  521204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-219333
	I0110 10:07:08.828167  521204 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/config.json ...
	I0110 10:07:08.828422  521204 machine.go:94] provisionDockerMachine start ...
	I0110 10:07:08.828514  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:08.850936  521204 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:08.851274  521204 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I0110 10:07:08.851291  521204 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:07:08.851896  521204 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 10:07:12.016520  521204 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-219333
	
	I0110 10:07:12.016549  521204 ubuntu.go:182] provisioning hostname "embed-certs-219333"
	I0110 10:07:12.016642  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:12.041458  521204 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:12.041786  521204 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I0110 10:07:12.041804  521204 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-219333 && echo "embed-certs-219333" | sudo tee /etc/hostname
	I0110 10:07:12.201976  521204 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-219333
	
	I0110 10:07:12.202112  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:12.219408  521204 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:12.219733  521204 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I0110 10:07:12.219749  521204 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-219333' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-219333/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-219333' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:07:12.384768  521204 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:07:12.384795  521204 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:07:12.384834  521204 ubuntu.go:190] setting up certificates
	I0110 10:07:12.384843  521204 provision.go:84] configureAuth start
	I0110 10:07:12.384918  521204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-219333
	I0110 10:07:12.401180  521204 provision.go:143] copyHostCerts
	I0110 10:07:12.401263  521204 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:07:12.401287  521204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:07:12.401371  521204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:07:12.401479  521204 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:07:12.401491  521204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:07:12.401520  521204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:07:12.401590  521204 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:07:12.401600  521204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:07:12.401625  521204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:07:12.401684  521204 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.embed-certs-219333 san=[127.0.0.1 192.168.76.2 embed-certs-219333 localhost minikube]
	I0110 10:07:12.588154  521204 provision.go:177] copyRemoteCerts
	I0110 10:07:12.588272  521204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:07:12.588342  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:12.606526  521204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:07:12.712935  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:07:12.730042  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 10:07:12.750265  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 10:07:12.767792  521204 provision.go:87] duration metric: took 382.918628ms to configureAuth
	I0110 10:07:12.767824  521204 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:07:12.768045  521204 config.go:182] Loaded profile config "embed-certs-219333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:07:12.768165  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:12.784848  521204 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:12.785183  521204 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33449 <nil> <nil>}
	I0110 10:07:12.785202  521204 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:07:13.173810  521204 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:07:13.173832  521204 machine.go:97] duration metric: took 4.34539323s to provisionDockerMachine
	I0110 10:07:13.173844  521204 start.go:293] postStartSetup for "embed-certs-219333" (driver="docker")
	I0110 10:07:13.173855  521204 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:07:13.173922  521204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:07:13.173973  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:13.197450  521204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	W0110 10:07:10.570395  517877 node_ready.go:57] node "default-k8s-diff-port-820203" has "Ready":"False" status (will retry)
	I0110 10:07:11.570786  517877 node_ready.go:49] node "default-k8s-diff-port-820203" is "Ready"
	I0110 10:07:11.570815  517877 node_ready.go:38] duration metric: took 12.503080696s for node "default-k8s-diff-port-820203" to be "Ready" ...
	I0110 10:07:11.570828  517877 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:07:11.570892  517877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:07:11.589130  517877 api_server.go:72] duration metric: took 13.68989631s to wait for apiserver process to appear ...
	I0110 10:07:11.589153  517877 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:07:11.589172  517877 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 10:07:11.599566  517877 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0110 10:07:11.602498  517877 api_server.go:141] control plane version: v1.35.0
	I0110 10:07:11.602532  517877 api_server.go:131] duration metric: took 13.370935ms to wait for apiserver health ...
	I0110 10:07:11.602542  517877 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:07:11.607672  517877 system_pods.go:59] 8 kube-system pods found
	I0110 10:07:11.607714  517877 system_pods.go:61] "coredns-7d764666f9-5kgtf" [9e03146c-d6d6-402a-8a86-8558a61c293a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:07:11.607722  517877 system_pods.go:61] "etcd-default-k8s-diff-port-820203" [b88953ce-244f-4cf7-a7b2-46390dea4e94] Running
	I0110 10:07:11.607728  517877 system_pods.go:61] "kindnet-kg5mk" [37256b6f-f68a-4674-a9b8-9985a45a1469] Running
	I0110 10:07:11.607735  517877 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-820203" [6ea2bae0-4962-4e3a-9255-6b2072677d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:07:11.607740  517877 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-820203" [4b15e318-df17-4c04-b306-2f85d72d5b03] Running
	I0110 10:07:11.607745  517877 system_pods.go:61] "kube-proxy-h677z" [d7dc7e83-f97e-4c19-800c-5882ff43b0f9] Running
	I0110 10:07:11.607750  517877 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-820203" [aed335f9-7712-4dd1-8c66-6c984b34b4e1] Running
	I0110 10:07:11.607761  517877 system_pods.go:61] "storage-provisioner" [988b2cb8-be15-4bee-bc89-382c038a9348] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:07:11.607774  517877 system_pods.go:74] duration metric: took 5.224362ms to wait for pod list to return data ...
	I0110 10:07:11.607783  517877 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:07:11.614671  517877 default_sa.go:45] found service account: "default"
	I0110 10:07:11.614702  517877 default_sa.go:55] duration metric: took 6.90924ms for default service account to be created ...
	I0110 10:07:11.614713  517877 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:07:11.618423  517877 system_pods.go:86] 8 kube-system pods found
	I0110 10:07:11.618456  517877 system_pods.go:89] "coredns-7d764666f9-5kgtf" [9e03146c-d6d6-402a-8a86-8558a61c293a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:07:11.618463  517877 system_pods.go:89] "etcd-default-k8s-diff-port-820203" [b88953ce-244f-4cf7-a7b2-46390dea4e94] Running
	I0110 10:07:11.618469  517877 system_pods.go:89] "kindnet-kg5mk" [37256b6f-f68a-4674-a9b8-9985a45a1469] Running
	I0110 10:07:11.618479  517877 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-820203" [6ea2bae0-4962-4e3a-9255-6b2072677d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:07:11.618484  517877 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-820203" [4b15e318-df17-4c04-b306-2f85d72d5b03] Running
	I0110 10:07:11.618489  517877 system_pods.go:89] "kube-proxy-h677z" [d7dc7e83-f97e-4c19-800c-5882ff43b0f9] Running
	I0110 10:07:11.618494  517877 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-820203" [aed335f9-7712-4dd1-8c66-6c984b34b4e1] Running
	I0110 10:07:11.618500  517877 system_pods.go:89] "storage-provisioner" [988b2cb8-be15-4bee-bc89-382c038a9348] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:07:11.618532  517877 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 10:07:11.846596  517877 system_pods.go:86] 8 kube-system pods found
	I0110 10:07:11.846632  517877 system_pods.go:89] "coredns-7d764666f9-5kgtf" [9e03146c-d6d6-402a-8a86-8558a61c293a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:07:11.846640  517877 system_pods.go:89] "etcd-default-k8s-diff-port-820203" [b88953ce-244f-4cf7-a7b2-46390dea4e94] Running
	I0110 10:07:11.846647  517877 system_pods.go:89] "kindnet-kg5mk" [37256b6f-f68a-4674-a9b8-9985a45a1469] Running
	I0110 10:07:11.846673  517877 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-820203" [6ea2bae0-4962-4e3a-9255-6b2072677d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:07:11.846684  517877 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-820203" [4b15e318-df17-4c04-b306-2f85d72d5b03] Running
	I0110 10:07:11.846691  517877 system_pods.go:89] "kube-proxy-h677z" [d7dc7e83-f97e-4c19-800c-5882ff43b0f9] Running
	I0110 10:07:11.846699  517877 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-820203" [aed335f9-7712-4dd1-8c66-6c984b34b4e1] Running
	I0110 10:07:11.846706  517877 system_pods.go:89] "storage-provisioner" [988b2cb8-be15-4bee-bc89-382c038a9348] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:07:12.226699  517877 system_pods.go:86] 8 kube-system pods found
	I0110 10:07:12.226732  517877 system_pods.go:89] "coredns-7d764666f9-5kgtf" [9e03146c-d6d6-402a-8a86-8558a61c293a] Running
	I0110 10:07:12.226740  517877 system_pods.go:89] "etcd-default-k8s-diff-port-820203" [b88953ce-244f-4cf7-a7b2-46390dea4e94] Running
	I0110 10:07:12.226745  517877 system_pods.go:89] "kindnet-kg5mk" [37256b6f-f68a-4674-a9b8-9985a45a1469] Running
	I0110 10:07:12.226749  517877 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-820203" [6ea2bae0-4962-4e3a-9255-6b2072677d16] Running
	I0110 10:07:12.226754  517877 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-820203" [4b15e318-df17-4c04-b306-2f85d72d5b03] Running
	I0110 10:07:12.226759  517877 system_pods.go:89] "kube-proxy-h677z" [d7dc7e83-f97e-4c19-800c-5882ff43b0f9] Running
	I0110 10:07:12.226763  517877 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-820203" [aed335f9-7712-4dd1-8c66-6c984b34b4e1] Running
	I0110 10:07:12.226768  517877 system_pods.go:89] "storage-provisioner" [988b2cb8-be15-4bee-bc89-382c038a9348] Running
	I0110 10:07:12.226777  517877 system_pods.go:126] duration metric: took 612.055475ms to wait for k8s-apps to be running ...
	I0110 10:07:12.226783  517877 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:07:12.226838  517877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:07:12.246909  517877 system_svc.go:56] duration metric: took 20.115661ms WaitForService to wait for kubelet
	I0110 10:07:12.246940  517877 kubeadm.go:587] duration metric: took 14.347710956s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:07:12.246959  517877 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:07:12.250346  517877 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:07:12.250377  517877 node_conditions.go:123] node cpu capacity is 2
	I0110 10:07:12.250391  517877 node_conditions.go:105] duration metric: took 3.426792ms to run NodePressure ...
	I0110 10:07:12.250404  517877 start.go:242] waiting for startup goroutines ...
	I0110 10:07:12.250416  517877 start.go:247] waiting for cluster config update ...
	I0110 10:07:12.250427  517877 start.go:256] writing updated cluster config ...
	I0110 10:07:12.250710  517877 ssh_runner.go:195] Run: rm -f paused
	I0110 10:07:12.254585  517877 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:07:12.258316  517877 pod_ready.go:83] waiting for pod "coredns-7d764666f9-5kgtf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:12.262872  517877 pod_ready.go:94] pod "coredns-7d764666f9-5kgtf" is "Ready"
	I0110 10:07:12.262900  517877 pod_ready.go:86] duration metric: took 4.55932ms for pod "coredns-7d764666f9-5kgtf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:12.265057  517877 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:12.275148  517877 pod_ready.go:94] pod "etcd-default-k8s-diff-port-820203" is "Ready"
	I0110 10:07:12.275186  517877 pod_ready.go:86] duration metric: took 10.098286ms for pod "etcd-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:12.277626  517877 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:12.281990  517877 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-820203" is "Ready"
	I0110 10:07:12.282025  517877 pod_ready.go:86] duration metric: took 4.365676ms for pod "kube-apiserver-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:12.290711  517877 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:12.658849  517877 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-820203" is "Ready"
	I0110 10:07:12.658875  517877 pod_ready.go:86] duration metric: took 368.138313ms for pod "kube-controller-manager-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:12.859102  517877 pod_ready.go:83] waiting for pod "kube-proxy-h677z" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:13.258265  517877 pod_ready.go:94] pod "kube-proxy-h677z" is "Ready"
	I0110 10:07:13.258295  517877 pod_ready.go:86] duration metric: took 399.167293ms for pod "kube-proxy-h677z" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:13.459277  517877 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:13.858440  517877 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-820203" is "Ready"
	I0110 10:07:13.858466  517877 pod_ready.go:86] duration metric: took 399.15853ms for pod "kube-scheduler-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:13.858479  517877 pod_ready.go:40] duration metric: took 1.603861692s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:07:13.938922  517877 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:07:13.942176  517877 out.go:203] 
	W0110 10:07:13.945183  517877 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:07:13.948593  517877 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:07:13.952453  517877 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-820203" cluster and "default" namespace by default
	I0110 10:07:13.300333  521204 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:07:13.303743  521204 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:07:13.303774  521204 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:07:13.303786  521204 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:07:13.303847  521204 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:07:13.303934  521204 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:07:13.304042  521204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:07:13.311930  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:07:13.329682  521204 start.go:296] duration metric: took 155.822709ms for postStartSetup
	I0110 10:07:13.329780  521204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:07:13.329831  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:13.348600  521204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:07:13.453667  521204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:07:13.460121  521204 fix.go:56] duration metric: took 4.978953513s for fixHost
	I0110 10:07:13.460177  521204 start.go:83] releasing machines lock for "embed-certs-219333", held for 4.979031512s
	I0110 10:07:13.460263  521204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-219333
	I0110 10:07:13.476652  521204 ssh_runner.go:195] Run: cat /version.json
	I0110 10:07:13.476709  521204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:07:13.476720  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:13.476765  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:13.494425  521204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:07:13.495124  521204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:07:13.699535  521204 ssh_runner.go:195] Run: systemctl --version
	I0110 10:07:13.706110  521204 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:07:13.746267  521204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:07:13.750681  521204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:07:13.750761  521204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:07:13.758584  521204 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 10:07:13.758609  521204 start.go:496] detecting cgroup driver to use...
	I0110 10:07:13.758662  521204 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:07:13.758737  521204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:07:13.774726  521204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:07:13.789162  521204 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:07:13.789227  521204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:07:13.805759  521204 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:07:13.819656  521204 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:07:13.977432  521204 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:07:14.158237  521204 docker.go:234] disabling docker service ...
	I0110 10:07:14.159208  521204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:07:14.191737  521204 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:07:14.211526  521204 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:07:14.355195  521204 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:07:14.489403  521204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:07:14.502776  521204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:07:14.518495  521204 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:07:14.518563  521204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:14.528282  521204 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:07:14.528350  521204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:14.538631  521204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:14.549327  521204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:14.569217  521204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:07:14.585500  521204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:14.595178  521204 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:14.604444  521204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:14.614548  521204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:07:14.622295  521204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:07:14.629956  521204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:07:14.749596  521204 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:07:14.919245  521204 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:07:14.919316  521204 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:07:14.923227  521204 start.go:574] Will wait 60s for crictl version
	I0110 10:07:14.923329  521204 ssh_runner.go:195] Run: which crictl
	I0110 10:07:14.926886  521204 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:07:14.952742  521204 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:07:14.952825  521204 ssh_runner.go:195] Run: crio --version
	I0110 10:07:14.982382  521204 ssh_runner.go:195] Run: crio --version
	I0110 10:07:15.028580  521204 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:07:15.035164  521204 cli_runner.go:164] Run: docker network inspect embed-certs-219333 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:07:15.054351  521204 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:07:15.059237  521204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:07:15.070674  521204 kubeadm.go:884] updating cluster {Name:embed-certs-219333 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-219333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:07:15.070799  521204 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:07:15.070873  521204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:07:15.113714  521204 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:07:15.113740  521204 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:07:15.113803  521204 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:07:15.142248  521204 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:07:15.142275  521204 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:07:15.142284  521204 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 10:07:15.142386  521204 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-219333 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-219333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:07:15.142471  521204 ssh_runner.go:195] Run: crio config
	I0110 10:07:15.206467  521204 cni.go:84] Creating CNI manager for ""
	I0110 10:07:15.206492  521204 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:07:15.206532  521204 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:07:15.206563  521204 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-219333 NodeName:embed-certs-219333 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:07:15.206704  521204 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-219333"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:07:15.206783  521204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:07:15.214763  521204 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:07:15.214891  521204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:07:15.222805  521204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0110 10:07:15.236403  521204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:07:15.252014  521204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I0110 10:07:15.267596  521204 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:07:15.271293  521204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:07:15.281516  521204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:07:15.401661  521204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:07:15.420901  521204 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333 for IP: 192.168.76.2
	I0110 10:07:15.420924  521204 certs.go:195] generating shared ca certs ...
	I0110 10:07:15.420941  521204 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:15.421140  521204 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:07:15.421215  521204 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:07:15.421231  521204 certs.go:257] generating profile certs ...
	I0110 10:07:15.421343  521204 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/client.key
	I0110 10:07:15.421443  521204 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.key.a4f0d3e0
	I0110 10:07:15.421537  521204 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.key
	I0110 10:07:15.421675  521204 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:07:15.421731  521204 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:07:15.421746  521204 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:07:15.421788  521204 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:07:15.421838  521204 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:07:15.421872  521204 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:07:15.421944  521204 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:07:15.422568  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:07:15.444619  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:07:15.465905  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:07:15.490062  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:07:15.508320  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0110 10:07:15.531914  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 10:07:15.559217  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:07:15.584216  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/embed-certs-219333/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:07:15.627429  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:07:15.646343  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:07:15.665834  521204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:07:15.688048  521204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:07:15.702412  521204 ssh_runner.go:195] Run: openssl version
	I0110 10:07:15.709085  521204 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:07:15.717548  521204 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:07:15.734034  521204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:07:15.738111  521204 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:07:15.738183  521204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:07:15.781229  521204 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:07:15.788645  521204 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:07:15.796104  521204 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:07:15.803993  521204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:07:15.807843  521204 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:07:15.807960  521204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:07:15.849869  521204 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:07:15.857617  521204 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:15.866325  521204 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:07:15.875780  521204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:15.881196  521204 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:15.881302  521204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:15.924656  521204 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:07:15.932732  521204 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:07:15.937565  521204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 10:07:15.979643  521204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 10:07:16.021121  521204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 10:07:16.067957  521204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 10:07:16.152603  521204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 10:07:16.249758  521204 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 10:07:16.356349  521204 kubeadm.go:401] StartCluster: {Name:embed-certs-219333 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-219333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:07:16.356507  521204 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:07:16.356617  521204 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:07:16.443116  521204 cri.go:96] found id: "d60025e9eaf7adb52700e8aca2a8a63d05e321eb59e4e696674205d1577f88e6"
	I0110 10:07:16.443192  521204 cri.go:96] found id: "23d9f7d67b99820f29a228986440deb42a7643b108034bd10629d2cd7e74d814"
	I0110 10:07:16.443212  521204 cri.go:96] found id: "34471ac06a0868183f7bbf12a60eede49ca6265f4f8b78f35058634a2296e139"
	I0110 10:07:16.443233  521204 cri.go:96] found id: "cd78f3af49f4d143a1ec414506ec5513f9eff8215806fa0cf31e02e797a439b2"
	I0110 10:07:16.443262  521204 cri.go:96] found id: ""
	I0110 10:07:16.443344  521204 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 10:07:16.499313  521204 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:07:16Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:07:16.499455  521204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:07:16.519220  521204 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 10:07:16.519290  521204 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 10:07:16.519358  521204 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 10:07:16.538122  521204 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 10:07:16.538750  521204 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-219333" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:07:16.539084  521204 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-308033/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-219333" cluster setting kubeconfig missing "embed-certs-219333" context setting]
	I0110 10:07:16.539591  521204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:16.541618  521204 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 10:07:16.553939  521204 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 10:07:16.554016  521204 kubeadm.go:602] duration metric: took 34.704983ms to restartPrimaryControlPlane
	I0110 10:07:16.554056  521204 kubeadm.go:403] duration metric: took 197.717806ms to StartCluster
	I0110 10:07:16.554087  521204 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:16.554167  521204 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:07:16.555423  521204 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:16.555717  521204 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:07:16.555876  521204 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:07:16.556336  521204 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-219333"
	I0110 10:07:16.556458  521204 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-219333"
	W0110 10:07:16.556485  521204 addons.go:248] addon storage-provisioner should already be in state true
	I0110 10:07:16.556542  521204 host.go:66] Checking if "embed-certs-219333" exists ...
	I0110 10:07:16.557061  521204 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:07:16.557272  521204 addons.go:70] Setting dashboard=true in profile "embed-certs-219333"
	I0110 10:07:16.557322  521204 addons.go:239] Setting addon dashboard=true in "embed-certs-219333"
	W0110 10:07:16.557345  521204 addons.go:248] addon dashboard should already be in state true
	I0110 10:07:16.557384  521204 host.go:66] Checking if "embed-certs-219333" exists ...
	I0110 10:07:16.557743  521204 addons.go:70] Setting default-storageclass=true in profile "embed-certs-219333"
	I0110 10:07:16.557759  521204 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-219333"
	I0110 10:07:16.557993  521204 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:07:16.558228  521204 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:07:16.556139  521204 config.go:182] Loaded profile config "embed-certs-219333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:07:16.563101  521204 out.go:179] * Verifying Kubernetes components...
	I0110 10:07:16.568638  521204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:07:16.606837  521204 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:07:16.609783  521204 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:07:16.609813  521204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:07:16.609884  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:16.628790  521204 addons.go:239] Setting addon default-storageclass=true in "embed-certs-219333"
	W0110 10:07:16.628813  521204 addons.go:248] addon default-storageclass should already be in state true
	I0110 10:07:16.628839  521204 host.go:66] Checking if "embed-certs-219333" exists ...
	I0110 10:07:16.629271  521204 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:07:16.637321  521204 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 10:07:16.640323  521204 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 10:07:16.642976  521204 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 10:07:16.643002  521204 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 10:07:16.643090  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:16.667936  521204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:07:16.674071  521204 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:07:16.674089  521204 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:07:16.674149  521204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:07:16.706164  521204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:07:16.707189  521204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:07:17.025171  521204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:07:17.045117  521204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:07:17.048455  521204 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:07:17.106962  521204 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 10:07:17.107038  521204 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 10:07:17.181430  521204 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 10:07:17.181505  521204 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 10:07:17.262776  521204 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 10:07:17.262806  521204 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 10:07:17.279697  521204 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 10:07:17.279725  521204 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 10:07:17.295281  521204 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 10:07:17.295320  521204 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 10:07:17.310660  521204 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 10:07:17.310709  521204 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 10:07:17.325840  521204 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 10:07:17.325867  521204 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 10:07:17.350303  521204 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 10:07:17.350377  521204 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 10:07:17.385980  521204 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:07:17.386055  521204 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 10:07:17.409590  521204 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:07:20.158996  521204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.133741383s)
	I0110 10:07:21.324215  521204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.27900692s)
	I0110 10:07:21.324274  521204 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.275726279s)
	I0110 10:07:21.324313  521204 node_ready.go:35] waiting up to 6m0s for node "embed-certs-219333" to be "Ready" ...
	I0110 10:07:21.343800  521204 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.934117774s)
	I0110 10:07:21.347372  521204 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-219333 addons enable metrics-server
	
	I0110 10:07:21.350573  521204 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I0110 10:07:21.354852  521204 addons.go:530] duration metric: took 4.798974798s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I0110 10:07:21.369264  521204 node_ready.go:49] node "embed-certs-219333" is "Ready"
	I0110 10:07:21.369299  521204 node_ready.go:38] duration metric: took 44.970123ms for node "embed-certs-219333" to be "Ready" ...
	I0110 10:07:21.369313  521204 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:07:21.369372  521204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:07:21.392080  521204 api_server.go:72] duration metric: took 4.835889442s to wait for apiserver process to appear ...
	I0110 10:07:21.392159  521204 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:07:21.392193  521204 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:07:21.405853  521204 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:07:21.405880  521204 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 10:07:21.892320  521204 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:07:21.900489  521204 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:07:21.901726  521204 api_server.go:141] control plane version: v1.35.0
	I0110 10:07:21.901755  521204 api_server.go:131] duration metric: took 509.577197ms to wait for apiserver health ...
	I0110 10:07:21.901765  521204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:07:21.905109  521204 system_pods.go:59] 8 kube-system pods found
	I0110 10:07:21.905152  521204 system_pods.go:61] "coredns-7d764666f9-ct6xj" [7202fe21-4df1-4fd6-aeab-b78de21d43f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:07:21.905162  521204 system_pods.go:61] "etcd-embed-certs-219333" [62a8ccba-8b23-4f61-a0ca-1295a9af29c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:07:21.905168  521204 system_pods.go:61] "kindnet-px8l8" [b918c202-c46b-4271-a53d-c2f3e0597f24] Running
	I0110 10:07:21.905175  521204 system_pods.go:61] "kube-apiserver-embed-certs-219333" [5a4b1d91-90be-42e6-868c-48743554bf8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:07:21.905192  521204 system_pods.go:61] "kube-controller-manager-embed-certs-219333" [07fb83cf-da8a-489c-8397-b2347fd52566] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:07:21.905198  521204 system_pods.go:61] "kube-proxy-gplbn" [b42edc75-1624-420a-80b3-4472f2766114] Running
	I0110 10:07:21.905211  521204 system_pods.go:61] "kube-scheduler-embed-certs-219333" [0b9927f9-f136-41b9-9f37-e14f60b6ba8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:07:21.905215  521204 system_pods.go:61] "storage-provisioner" [ef23f9bc-2e08-4b78-8b1c-01cec8e469f1] Running
	I0110 10:07:21.905221  521204 system_pods.go:74] duration metric: took 3.45108ms to wait for pod list to return data ...
	I0110 10:07:21.905233  521204 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:07:21.910707  521204 default_sa.go:45] found service account: "default"
	I0110 10:07:21.910736  521204 default_sa.go:55] duration metric: took 5.497251ms for default service account to be created ...
	I0110 10:07:21.910747  521204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:07:21.913826  521204 system_pods.go:86] 8 kube-system pods found
	I0110 10:07:21.913859  521204 system_pods.go:89] "coredns-7d764666f9-ct6xj" [7202fe21-4df1-4fd6-aeab-b78de21d43f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:07:21.913897  521204 system_pods.go:89] "etcd-embed-certs-219333" [62a8ccba-8b23-4f61-a0ca-1295a9af29c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:07:21.913914  521204 system_pods.go:89] "kindnet-px8l8" [b918c202-c46b-4271-a53d-c2f3e0597f24] Running
	I0110 10:07:21.913922  521204 system_pods.go:89] "kube-apiserver-embed-certs-219333" [5a4b1d91-90be-42e6-868c-48743554bf8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:07:21.913933  521204 system_pods.go:89] "kube-controller-manager-embed-certs-219333" [07fb83cf-da8a-489c-8397-b2347fd52566] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:07:21.913942  521204 system_pods.go:89] "kube-proxy-gplbn" [b42edc75-1624-420a-80b3-4472f2766114] Running
	I0110 10:07:21.913949  521204 system_pods.go:89] "kube-scheduler-embed-certs-219333" [0b9927f9-f136-41b9-9f37-e14f60b6ba8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:07:21.913971  521204 system_pods.go:89] "storage-provisioner" [ef23f9bc-2e08-4b78-8b1c-01cec8e469f1] Running
	I0110 10:07:21.913987  521204 system_pods.go:126] duration metric: took 3.234331ms to wait for k8s-apps to be running ...
	I0110 10:07:21.914005  521204 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:07:21.914084  521204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:07:21.937720  521204 system_svc.go:56] duration metric: took 23.713081ms WaitForService to wait for kubelet
	I0110 10:07:21.937753  521204 kubeadm.go:587] duration metric: took 5.381580821s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:07:21.937772  521204 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:07:21.951232  521204 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:07:21.951268  521204 node_conditions.go:123] node cpu capacity is 2
	I0110 10:07:21.951281  521204 node_conditions.go:105] duration metric: took 13.474904ms to run NodePressure ...
	I0110 10:07:21.951320  521204 start.go:242] waiting for startup goroutines ...
	I0110 10:07:21.951338  521204 start.go:247] waiting for cluster config update ...
	I0110 10:07:21.951350  521204 start.go:256] writing updated cluster config ...
	I0110 10:07:21.951654  521204 ssh_runner.go:195] Run: rm -f paused
	I0110 10:07:21.955493  521204 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:07:21.961582  521204 pod_ready.go:83] waiting for pod "coredns-7d764666f9-ct6xj" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> CRI-O <==
	Jan 10 10:07:11 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:11.697108943Z" level=info msg="Created container a4cd9e9b3172be7762daf2b72238bb074ead8d0293a740856fd71c9a5af1684d: kube-system/coredns-7d764666f9-5kgtf/coredns" id=6fd41606-f36c-45ba-8173-b67835c23fe9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:07:11 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:11.698136829Z" level=info msg="Starting container: a4cd9e9b3172be7762daf2b72238bb074ead8d0293a740856fd71c9a5af1684d" id=300e9794-23fe-48ba-b3dd-420563568e51 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:07:11 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:11.69967691Z" level=info msg="Started container" PID=1776 containerID=a4cd9e9b3172be7762daf2b72238bb074ead8d0293a740856fd71c9a5af1684d description=kube-system/coredns-7d764666f9-5kgtf/coredns id=300e9794-23fe-48ba-b3dd-420563568e51 name=/runtime.v1.RuntimeService/StartContainer sandboxID=929582f691cd516cd095658b900b93b6b92278b08220f2dc9893828715d9585b
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.548307833Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e9627369-268d-4ea6-9a0d-f0693c00d823 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.548417956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.554942691Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:dd74b8dab4860464da4dc1f6f6b84b1b7b9357208dc5b1e23e30893916c55f14 UID:bfb7e017-ab95-4a49-b9a3-f277223dc9f8 NetNS:/var/run/netns/5362578d-ed11-4680-9b35-2a891b71efb4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000145d40}] Aliases:map[]}"
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.554981724Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.572533676Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:dd74b8dab4860464da4dc1f6f6b84b1b7b9357208dc5b1e23e30893916c55f14 UID:bfb7e017-ab95-4a49-b9a3-f277223dc9f8 NetNS:/var/run/netns/5362578d-ed11-4680-9b35-2a891b71efb4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000145d40}] Aliases:map[]}"
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.572862698Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.579619313Z" level=info msg="Ran pod sandbox dd74b8dab4860464da4dc1f6f6b84b1b7b9357208dc5b1e23e30893916c55f14 with infra container: default/busybox/POD" id=e9627369-268d-4ea6-9a0d-f0693c00d823 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.582467128Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d096275c-45e2-492f-9072-e19fb6bd609c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.582601956Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d096275c-45e2-492f-9072-e19fb6bd609c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.582678576Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d096275c-45e2-492f-9072-e19fb6bd609c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.58460295Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c6efc86e-e9e6-4568-a01f-fdec598231b4 name=/runtime.v1.ImageService/PullImage
	Jan 10 10:07:14 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:14.585008206Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 10:07:16 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:16.871772787Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=c6efc86e-e9e6-4568-a01f-fdec598231b4 name=/runtime.v1.ImageService/PullImage
	Jan 10 10:07:16 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:16.878363091Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e7692c89-92fd-44d1-a0c7-4eea377229f6 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:07:16 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:16.882668089Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=89f401b0-1c96-4533-b368-89dd945264cf name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:07:16 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:16.893670502Z" level=info msg="Creating container: default/busybox/busybox" id=2da15058-49b1-4a38-aaf2-6446cfc15231 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:07:16 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:16.893801449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:07:16 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:16.906792457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:07:16 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:16.907461332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:07:16 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:16.939025352Z" level=info msg="Created container 0a57349074669b845655d6d996637b222e9492aed081d8002b8d2e32e7c12417: default/busybox/busybox" id=2da15058-49b1-4a38-aaf2-6446cfc15231 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:07:16 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:16.942139508Z" level=info msg="Starting container: 0a57349074669b845655d6d996637b222e9492aed081d8002b8d2e32e7c12417" id=be72b0e1-22cb-4d39-a2d0-adb92dc938fd name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:07:16 default-k8s-diff-port-820203 crio[839]: time="2026-01-10T10:07:16.949038944Z" level=info msg="Started container" PID=1836 containerID=0a57349074669b845655d6d996637b222e9492aed081d8002b8d2e32e7c12417 description=default/busybox/busybox id=be72b0e1-22cb-4d39-a2d0-adb92dc938fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd74b8dab4860464da4dc1f6f6b84b1b7b9357208dc5b1e23e30893916c55f14
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	0a57349074669       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   dd74b8dab4860       busybox                                                default
	a4cd9e9b3172b       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      13 seconds ago      Running             coredns                   0                   929582f691cd5       coredns-7d764666f9-5kgtf                               kube-system
	c8162b56e4d16       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   ee2c57b8671ce       storage-provisioner                                    kube-system
	5e1d9fd854bbe       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   ca51091f21eb7       kindnet-kg5mk                                          kube-system
	79034ffa35399       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      26 seconds ago      Running             kube-proxy                0                   af9f8fbdbc38f       kube-proxy-h677z                                       kube-system
	5e504aea5b1c0       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      37 seconds ago      Running             kube-apiserver            0                   80ed3b2fdca4a       kube-apiserver-default-k8s-diff-port-820203            kube-system
	809b93f643242       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      37 seconds ago      Running             kube-controller-manager   0                   854fd9004fd97       kube-controller-manager-default-k8s-diff-port-820203   kube-system
	67c8edcf791b7       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      37 seconds ago      Running             etcd                      0                   edd07a7a7a58a       etcd-default-k8s-diff-port-820203                      kube-system
	34432bc79b6af       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      37 seconds ago      Running             kube-scheduler            0                   75594eccd5521       kube-scheduler-default-k8s-diff-port-820203            kube-system
	
	
	==> coredns [a4cd9e9b3172be7762daf2b72238bb074ead8d0293a740856fd71c9a5af1684d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:51998 - 46481 "HINFO IN 8005058782736920426.690940512497197798. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023561711s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-820203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-820203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=default-k8s-diff-port-820203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_06_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:06:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-820203
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:07:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:07:24 +0000   Sat, 10 Jan 2026 10:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:07:24 +0000   Sat, 10 Jan 2026 10:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:07:24 +0000   Sat, 10 Jan 2026 10:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:07:24 +0000   Sat, 10 Jan 2026 10:07:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-820203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                3458310f-51b7-4cba-9b86-ae28b618509b
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-5kgtf                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-default-k8s-diff-port-820203                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-kg5mk                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-820203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-820203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-h677z                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-820203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node default-k8s-diff-port-820203 event: Registered Node default-k8s-diff-port-820203 in Controller
	
	
	==> dmesg <==
	[Jan10 09:36] overlayfs: idmapped layers are currently not supported
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	[Jan10 10:06] overlayfs: idmapped layers are currently not supported
	[ +32.420107] overlayfs: idmapped layers are currently not supported
	[Jan10 10:07] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [67c8edcf791b7aa0267ea1058f3b45d5e99a5343b914cadfcd7072da7567fbcf] <==
	{"level":"info","ts":"2026-01-10T10:06:47.572690Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:06:48.023401Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T10:06:48.023514Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T10:06:48.023592Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-10T10:06:48.023646Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:06:48.023717Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:06:48.026747Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T10:06:48.026839Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:06:48.026887Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-10T10:06:48.026938Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T10:06:48.030730Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-820203 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:06:48.030977Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:06:48.031108Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:06:48.031253Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:06:48.032096Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:06:48.032159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T10:06:48.032255Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:06:48.032404Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:06:48.032480Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:06:48.032577Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T10:06:48.032710Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T10:06:48.033457Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:06:48.035564Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-10T10:06:48.036632Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:06:48.040218Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:07:25 up  2:49,  0 user,  load average: 2.74, 1.86, 1.96
	Linux default-k8s-diff-port-820203 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e1d9fd854bbeb36beeb5c633ac2a2863aaa9349b976f29e9bc5e6c33bdaf7cc] <==
	I0110 10:07:00.724972       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:07:00.725381       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 10:07:00.725505       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:07:00.816634       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:07:00.816669       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:07:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:07:00.931713       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:07:01.017510       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:07:01.017624       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:07:01.019686       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 10:07:01.217804       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:07:01.219233       1 metrics.go:72] Registering metrics
	I0110 10:07:01.219498       1 controller.go:711] "Syncing nftables rules"
	I0110 10:07:10.931739       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 10:07:10.931831       1 main.go:301] handling current node
	I0110 10:07:20.932608       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 10:07:20.932751       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5e504aea5b1c08edcf330909a2a43c814e27ef6e49bd8429a5ab7e8909d77ff7] <==
	E0110 10:06:50.322061       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0110 10:06:50.333487       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:06:50.343119       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 10:06:50.353259       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:06:50.365828       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:06:50.366313       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 10:06:50.526076       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:06:51.035960       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 10:06:51.042922       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 10:06:51.042947       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:06:51.782638       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:06:51.836563       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:06:51.942974       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 10:06:51.954110       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0110 10:06:51.955308       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 10:06:51.972347       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:06:52.138003       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:06:52.811748       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:06:52.867067       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 10:06:52.886461       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 10:06:57.642673       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 10:06:57.847370       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:06:58.016168       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:06:58.049536       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0110 10:07:23.375512       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:48018: use of closed network connection
	
	
	==> kube-controller-manager [809b93f643242d6cd3d6e7dd5e6d75bdc09b41a8023afc2e38f2c25b35f9440d] <==
	I0110 10:06:56.960905       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.960928       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.960948       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.960986       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.961022       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.961145       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.961356       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.961431       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 10:06:56.961486       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-820203"
	I0110 10:06:56.961538       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 10:06:56.961560       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.962572       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.962666       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.962724       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.962772       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.962811       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.962848       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.960669       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.972654       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:56.981115       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-820203" podCIDRs=["10.244.0.0/24"]
	I0110 10:06:57.050487       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:57.052778       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:57.052800       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:06:57.052805       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 10:07:11.965406       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [79034ffa3539971d37572a6c6c55548b8cf6dbd810ba350fc7d93b2a7843762c] <==
	I0110 10:06:58.417989       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:06:58.506152       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:06:58.606771       1 shared_informer.go:377] "Caches are synced"
	I0110 10:06:58.606820       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 10:06:58.606907       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:06:58.687097       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:06:58.687153       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:06:58.700422       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:06:58.700739       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:06:58.700755       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:06:58.705131       1 config.go:200] "Starting service config controller"
	I0110 10:06:58.705151       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:06:58.714486       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:06:58.714503       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:06:58.714534       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:06:58.714538       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:06:58.715413       1 config.go:309] "Starting node config controller"
	I0110 10:06:58.715422       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:06:58.715429       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:06:58.806854       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:06:58.815038       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 10:06:58.815043       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [34432bc79b6affc6ed52488f2ec1c4f0fe604a3153f99ee6b4521ad8377f3164] <==
	E0110 10:06:50.317230       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 10:06:50.317509       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 10:06:50.317486       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 10:06:50.317686       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 10:06:50.317763       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 10:06:50.317924       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 10:06:50.318030       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 10:06:50.318115       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 10:06:50.318213       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 10:06:50.318267       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 10:06:50.318307       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 10:06:50.318347       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 10:06:50.318388       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 10:06:50.318442       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 10:06:50.318636       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 10:06:51.209787       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 10:06:51.253648       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 10:06:51.271701       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 10:06:51.305049       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 10:06:51.340992       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 10:06:51.425342       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 10:06:51.435555       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 10:06:51.506811       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 10:06:51.799386       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I0110 10:06:55.030945       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:06:57 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:06:57.702129    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7dc7e83-f97e-4c19-800c-5882ff43b0f9-lib-modules\") pod \"kube-proxy-h677z\" (UID: \"d7dc7e83-f97e-4c19-800c-5882ff43b0f9\") " pod="kube-system/kube-proxy-h677z"
	Jan 10 10:06:57 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:06:57.702149    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/37256b6f-f68a-4674-a9b8-9985a45a1469-cni-cfg\") pod \"kindnet-kg5mk\" (UID: \"37256b6f-f68a-4674-a9b8-9985a45a1469\") " pod="kube-system/kindnet-kg5mk"
	Jan 10 10:06:57 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:06:57.702183    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8grb\" (UniqueName: \"kubernetes.io/projected/37256b6f-f68a-4674-a9b8-9985a45a1469-kube-api-access-j8grb\") pod \"kindnet-kg5mk\" (UID: \"37256b6f-f68a-4674-a9b8-9985a45a1469\") " pod="kube-system/kindnet-kg5mk"
	Jan 10 10:06:57 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:06:57.829161    1300 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 10:06:58 default-k8s-diff-port-820203 kubelet[1300]: W0110 10:06:58.029610    1300 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/crio-ca51091f21eb793a14a6f465d19577071e2f9cbc311b240e741c5a8c4a04ceed WatchSource:0}: Error finding container ca51091f21eb793a14a6f465d19577071e2f9cbc311b240e741c5a8c4a04ceed: Status 404 returned error can't find the container with id ca51091f21eb793a14a6f465d19577071e2f9cbc311b240e741c5a8c4a04ceed
	Jan 10 10:06:58 default-k8s-diff-port-820203 kubelet[1300]: W0110 10:06:58.087203    1300 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/crio-af9f8fbdbc38f46f3fa6794cfb1d453a82127f044a7d54b584af9ee31eea8718 WatchSource:0}: Error finding container af9f8fbdbc38f46f3fa6794cfb1d453a82127f044a7d54b584af9ee31eea8718: Status 404 returned error can't find the container with id af9f8fbdbc38f46f3fa6794cfb1d453a82127f044a7d54b584af9ee31eea8718
	Jan 10 10:07:00 default-k8s-diff-port-820203 kubelet[1300]: E0110 10:07:00.525680    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-820203" containerName="kube-controller-manager"
	Jan 10 10:07:00 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:07:00.543199    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-h677z" podStartSLOduration=3.5431830189999998 podStartE2EDuration="3.543183019s" podCreationTimestamp="2026-01-10 10:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:06:58.903505486 +0000 UTC m=+6.286793108" watchObservedRunningTime="2026-01-10 10:07:00.543183019 +0000 UTC m=+7.926470633"
	Jan 10 10:07:01 default-k8s-diff-port-820203 kubelet[1300]: E0110 10:07:01.930805    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-820203" containerName="kube-apiserver"
	Jan 10 10:07:01 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:07:01.946571    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-kg5mk" podStartSLOduration=2.391245386 podStartE2EDuration="4.946554217s" podCreationTimestamp="2026-01-10 10:06:57 +0000 UTC" firstStartedPulling="2026-01-10 10:06:58.06618945 +0000 UTC m=+5.449477072" lastFinishedPulling="2026-01-10 10:07:00.621498289 +0000 UTC m=+8.004785903" observedRunningTime="2026-01-10 10:07:00.914225994 +0000 UTC m=+8.297513608" watchObservedRunningTime="2026-01-10 10:07:01.946554217 +0000 UTC m=+9.329841839"
	Jan 10 10:07:05 default-k8s-diff-port-820203 kubelet[1300]: E0110 10:07:05.036400    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-820203" containerName="etcd"
	Jan 10 10:07:06 default-k8s-diff-port-820203 kubelet[1300]: E0110 10:07:06.135414    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-820203" containerName="kube-scheduler"
	Jan 10 10:07:10 default-k8s-diff-port-820203 kubelet[1300]: E0110 10:07:10.534975    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-820203" containerName="kube-controller-manager"
	Jan 10 10:07:11 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:07:11.235708    1300 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 10:07:11 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:07:11.429120    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/988b2cb8-be15-4bee-bc89-382c038a9348-tmp\") pod \"storage-provisioner\" (UID: \"988b2cb8-be15-4bee-bc89-382c038a9348\") " pod="kube-system/storage-provisioner"
	Jan 10 10:07:11 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:07:11.429402    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdqk4\" (UniqueName: \"kubernetes.io/projected/988b2cb8-be15-4bee-bc89-382c038a9348-kube-api-access-xdqk4\") pod \"storage-provisioner\" (UID: \"988b2cb8-be15-4bee-bc89-382c038a9348\") " pod="kube-system/storage-provisioner"
	Jan 10 10:07:11 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:07:11.429508    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e03146c-d6d6-402a-8a86-8558a61c293a-config-volume\") pod \"coredns-7d764666f9-5kgtf\" (UID: \"9e03146c-d6d6-402a-8a86-8558a61c293a\") " pod="kube-system/coredns-7d764666f9-5kgtf"
	Jan 10 10:07:11 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:07:11.429546    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpw87\" (UniqueName: \"kubernetes.io/projected/9e03146c-d6d6-402a-8a86-8558a61c293a-kube-api-access-gpw87\") pod \"coredns-7d764666f9-5kgtf\" (UID: \"9e03146c-d6d6-402a-8a86-8558a61c293a\") " pod="kube-system/coredns-7d764666f9-5kgtf"
	Jan 10 10:07:11 default-k8s-diff-port-820203 kubelet[1300]: E0110 10:07:11.923308    1300 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5kgtf" containerName="coredns"
	Jan 10 10:07:11 default-k8s-diff-port-820203 kubelet[1300]: E0110 10:07:11.949656    1300 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-820203" containerName="kube-apiserver"
	Jan 10 10:07:11 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:07:11.979112    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-5kgtf" podStartSLOduration=13.979096448 podStartE2EDuration="13.979096448s" podCreationTimestamp="2026-01-10 10:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:07:11.950784138 +0000 UTC m=+19.334071751" watchObservedRunningTime="2026-01-10 10:07:11.979096448 +0000 UTC m=+19.362384062"
	Jan 10 10:07:12 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:07:12.029506    1300 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.029487408 podStartE2EDuration="13.029487408s" podCreationTimestamp="2026-01-10 10:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:07:11.996185363 +0000 UTC m=+19.379472985" watchObservedRunningTime="2026-01-10 10:07:12.029487408 +0000 UTC m=+19.412775030"
	Jan 10 10:07:12 default-k8s-diff-port-820203 kubelet[1300]: E0110 10:07:12.927587    1300 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5kgtf" containerName="coredns"
	Jan 10 10:07:13 default-k8s-diff-port-820203 kubelet[1300]: E0110 10:07:13.929615    1300 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5kgtf" containerName="coredns"
	Jan 10 10:07:14 default-k8s-diff-port-820203 kubelet[1300]: I0110 10:07:14.349659    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlswx\" (UniqueName: \"kubernetes.io/projected/bfb7e017-ab95-4a49-b9a3-f277223dc9f8-kube-api-access-jlswx\") pod \"busybox\" (UID: \"bfb7e017-ab95-4a49-b9a3-f277223dc9f8\") " pod="default/busybox"
	
	
	==> storage-provisioner [c8162b56e4d16e2bf6f03af0987a595784bbee3af98f5acb357cb402fe28da60] <==
	I0110 10:07:11.651075       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:07:11.667498       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:07:11.667545       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 10:07:11.677016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:11.687759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:07:11.687919       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:07:11.688698       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-820203_dc87150d-19b9-4f26-89c6-9471abbae5d1!
	I0110 10:07:11.688965       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e73b981a-6c80-4d85-b5f4-5190b80286fa", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-820203_dc87150d-19b9-4f26-89c6-9471abbae5d1 became leader
	W0110 10:07:11.694928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:11.706293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:07:11.788877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-820203_dc87150d-19b9-4f26-89c6-9471abbae5d1!
	W0110 10:07:13.709979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:13.717617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:15.720913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:15.730344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:17.733632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:17.757591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:19.761323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:19.768092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:21.771384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:21.775838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:23.782874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:23.795163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-820203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-219333 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-219333 --alsologtostderr -v=1: exit status 80 (1.834727448s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-219333 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 10:08:12.436622  526478 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:08:12.436787  526478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:08:12.436800  526478 out.go:374] Setting ErrFile to fd 2...
	I0110 10:08:12.436806  526478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:08:12.437114  526478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:08:12.437424  526478 out.go:368] Setting JSON to false
	I0110 10:08:12.437452  526478 mustload.go:66] Loading cluster: embed-certs-219333
	I0110 10:08:12.437910  526478 config.go:182] Loaded profile config "embed-certs-219333": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:08:12.439102  526478 cli_runner.go:164] Run: docker container inspect embed-certs-219333 --format={{.State.Status}}
	I0110 10:08:12.456271  526478 host.go:66] Checking if "embed-certs-219333" exists ...
	I0110 10:08:12.456689  526478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:08:12.532290  526478 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 10:08:12.521931396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:08:12.533013  526478 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-219333 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 10:08:12.536415  526478 out.go:179] * Pausing node embed-certs-219333 ... 
	I0110 10:08:12.540176  526478 host.go:66] Checking if "embed-certs-219333" exists ...
	I0110 10:08:12.540586  526478 ssh_runner.go:195] Run: systemctl --version
	I0110 10:08:12.540651  526478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-219333
	I0110 10:08:12.557446  526478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33449 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/embed-certs-219333/id_rsa Username:docker}
	I0110 10:08:12.659446  526478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:08:12.679963  526478 pause.go:52] kubelet running: true
	I0110 10:08:12.680035  526478 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:08:12.912042  526478 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:08:12.912134  526478 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:08:12.988172  526478 cri.go:96] found id: "5854ed490a60a78fb0f1c10a3e5218f7e00dd35bec31b251a72e8c796bb04abe"
	I0110 10:08:12.988203  526478 cri.go:96] found id: "275901198dfed0ab5abf24fc0360d62641f7d4317960467dfe338bdcdf590319"
	I0110 10:08:12.988208  526478 cri.go:96] found id: "f1a8a35556f782af9945790a1d2ff1e2bbda5bb31002b8141b6ce3a4fa1c5845"
	I0110 10:08:12.988212  526478 cri.go:96] found id: "02f011561bf27d692579b54ee785c828ef0f324698b8363d83bfb0f7df8245ee"
	I0110 10:08:12.988216  526478 cri.go:96] found id: "a259a4eaa5cdd0e3daddb79fd0994ee011e36fc39d0c7e6328c070219bb7520b"
	I0110 10:08:12.988220  526478 cri.go:96] found id: "d60025e9eaf7adb52700e8aca2a8a63d05e321eb59e4e696674205d1577f88e6"
	I0110 10:08:12.988224  526478 cri.go:96] found id: "23d9f7d67b99820f29a228986440deb42a7643b108034bd10629d2cd7e74d814"
	I0110 10:08:12.988227  526478 cri.go:96] found id: "34471ac06a0868183f7bbf12a60eede49ca6265f4f8b78f35058634a2296e139"
	I0110 10:08:12.988230  526478 cri.go:96] found id: "cd78f3af49f4d143a1ec414506ec5513f9eff8215806fa0cf31e02e797a439b2"
	I0110 10:08:12.988240  526478 cri.go:96] found id: "1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82"
	I0110 10:08:12.988244  526478 cri.go:96] found id: "2b627d4e4087c16b83689a421c12c3fdc4bd39321c0bcfefeb33bbe33ccfbcbd"
	I0110 10:08:12.988248  526478 cri.go:96] found id: ""
	I0110 10:08:12.988296  526478 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:08:13.008843  526478 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:08:13Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:08:13.319191  526478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:08:13.332657  526478 pause.go:52] kubelet running: false
	I0110 10:08:13.332722  526478 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:08:13.530038  526478 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:08:13.530116  526478 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:08:13.597689  526478 cri.go:96] found id: "5854ed490a60a78fb0f1c10a3e5218f7e00dd35bec31b251a72e8c796bb04abe"
	I0110 10:08:13.597711  526478 cri.go:96] found id: "275901198dfed0ab5abf24fc0360d62641f7d4317960467dfe338bdcdf590319"
	I0110 10:08:13.597720  526478 cri.go:96] found id: "f1a8a35556f782af9945790a1d2ff1e2bbda5bb31002b8141b6ce3a4fa1c5845"
	I0110 10:08:13.597724  526478 cri.go:96] found id: "02f011561bf27d692579b54ee785c828ef0f324698b8363d83bfb0f7df8245ee"
	I0110 10:08:13.597728  526478 cri.go:96] found id: "a259a4eaa5cdd0e3daddb79fd0994ee011e36fc39d0c7e6328c070219bb7520b"
	I0110 10:08:13.597732  526478 cri.go:96] found id: "d60025e9eaf7adb52700e8aca2a8a63d05e321eb59e4e696674205d1577f88e6"
	I0110 10:08:13.597735  526478 cri.go:96] found id: "23d9f7d67b99820f29a228986440deb42a7643b108034bd10629d2cd7e74d814"
	I0110 10:08:13.597739  526478 cri.go:96] found id: "34471ac06a0868183f7bbf12a60eede49ca6265f4f8b78f35058634a2296e139"
	I0110 10:08:13.597742  526478 cri.go:96] found id: "cd78f3af49f4d143a1ec414506ec5513f9eff8215806fa0cf31e02e797a439b2"
	I0110 10:08:13.597747  526478 cri.go:96] found id: "1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82"
	I0110 10:08:13.597751  526478 cri.go:96] found id: "2b627d4e4087c16b83689a421c12c3fdc4bd39321c0bcfefeb33bbe33ccfbcbd"
	I0110 10:08:13.597754  526478 cri.go:96] found id: ""
	I0110 10:08:13.597802  526478 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:08:13.934758  526478 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:08:13.948153  526478 pause.go:52] kubelet running: false
	I0110 10:08:13.948288  526478 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:08:14.115737  526478 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:08:14.115902  526478 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:08:14.187837  526478 cri.go:96] found id: "5854ed490a60a78fb0f1c10a3e5218f7e00dd35bec31b251a72e8c796bb04abe"
	I0110 10:08:14.187861  526478 cri.go:96] found id: "275901198dfed0ab5abf24fc0360d62641f7d4317960467dfe338bdcdf590319"
	I0110 10:08:14.187867  526478 cri.go:96] found id: "f1a8a35556f782af9945790a1d2ff1e2bbda5bb31002b8141b6ce3a4fa1c5845"
	I0110 10:08:14.187871  526478 cri.go:96] found id: "02f011561bf27d692579b54ee785c828ef0f324698b8363d83bfb0f7df8245ee"
	I0110 10:08:14.187874  526478 cri.go:96] found id: "a259a4eaa5cdd0e3daddb79fd0994ee011e36fc39d0c7e6328c070219bb7520b"
	I0110 10:08:14.187878  526478 cri.go:96] found id: "d60025e9eaf7adb52700e8aca2a8a63d05e321eb59e4e696674205d1577f88e6"
	I0110 10:08:14.187881  526478 cri.go:96] found id: "23d9f7d67b99820f29a228986440deb42a7643b108034bd10629d2cd7e74d814"
	I0110 10:08:14.187884  526478 cri.go:96] found id: "34471ac06a0868183f7bbf12a60eede49ca6265f4f8b78f35058634a2296e139"
	I0110 10:08:14.187887  526478 cri.go:96] found id: "cd78f3af49f4d143a1ec414506ec5513f9eff8215806fa0cf31e02e797a439b2"
	I0110 10:08:14.187893  526478 cri.go:96] found id: "1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82"
	I0110 10:08:14.187897  526478 cri.go:96] found id: "2b627d4e4087c16b83689a421c12c3fdc4bd39321c0bcfefeb33bbe33ccfbcbd"
	I0110 10:08:14.187900  526478 cri.go:96] found id: ""
	I0110 10:08:14.187949  526478 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:08:14.202114  526478 out.go:203] 
	W0110 10:08:14.204977  526478 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:08:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:08:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 10:08:14.205002  526478 out.go:285] * 
	* 
	W0110 10:08:14.208929  526478 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 10:08:14.212736  526478 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-219333 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-219333
helpers_test.go:244: (dbg) docker inspect embed-certs-219333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51",
	        "Created": "2026-01-10T10:06:01.259250049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 521328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:07:08.535648616Z",
	            "FinishedAt": "2026-01-10T10:07:07.612219537Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/hostname",
	        "HostsPath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/hosts",
	        "LogPath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51-json.log",
	        "Name": "/embed-certs-219333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-219333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-219333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51",
	                "LowerDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579/merged",
	                "UpperDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579/diff",
	                "WorkDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-219333",
	                "Source": "/var/lib/docker/volumes/embed-certs-219333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-219333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-219333",
	                "name.minikube.sigs.k8s.io": "embed-certs-219333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9396cfae0d1094ece603c968d06179a28de8a026bf5910df569afb982a624c5",
	            "SandboxKey": "/var/run/docker/netns/a9396cfae0d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-219333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:8b:83:d5:dd:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d1e980d25c729b4e5350b1ccfb2f436b31893785314b40506467e9431269ca0",
	                    "EndpointID": "e72032122ab56d42e7caaa4fb6d93c9b2ce2798cb7f554deb37eab3523ecaa14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-219333",
	                        "11d72dc06eff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-219333 -n embed-certs-219333
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-219333 -n embed-certs-219333: exit status 2 (348.467603ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-219333 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-219333 logs -n 25: (1.422755169s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │                     │
	│ stop    │ -p no-preload-964204 --alsologtostderr -v=3                                                                                                                              │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable dashboard -p no-preload-964204 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:05 UTC │
	│ image   │ no-preload-964204 image list --format=json                                                                                                                               │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ pause   │ -p no-preload-964204 --alsologtostderr -v=1                                                                                                                              │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │                     │
	│ delete  │ -p no-preload-964204                                                                                                                                                     │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ delete  │ -p no-preload-964204                                                                                                                                                     │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:06 UTC │
	│ ssh     │ force-systemd-flag-524845 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p force-systemd-flag-524845                                                                                                                                             │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p disable-driver-mounts-757819                                                                                                                                          │ disable-driver-mounts-757819 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-219333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │                     │
	│ stop    │ -p embed-certs-219333 --alsologtostderr -v=3                                                                                                                             │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-219333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-820203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-820203 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-820203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │                     │
	│ image   │ embed-certs-219333 image list --format=json                                                                                                                              │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p embed-certs-219333 --alsologtostderr -v=1                                                                                                                             │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
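Any of the invocations in the table can be replayed by hand against the same locally built binary; for example, the embed-certs start near the end of the table maps to the following (a sketch, assuming the out/minikube-linux-arm64 build used by this run):

    out/minikube-linux-arm64 start -p embed-certs-219333 --memory=3072 --alsologtostderr \
      --wait=true --embed-certs --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0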
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:07:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
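Because the severity letter ([IWEF]) leads each entry, warnings and errors can be isolated from a saved copy of this log with a simple filter (a sketch; last-start.log is a hypothetical file name):

    grep -E '^[[:space:]]*[WEF][0-9]{4}' last-start.log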
	I0110 10:07:39.332488  524195 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:07:39.332700  524195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:07:39.332712  524195 out.go:374] Setting ErrFile to fd 2...
	I0110 10:07:39.332718  524195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:07:39.333117  524195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:07:39.333604  524195 out.go:368] Setting JSON to false
	I0110 10:07:39.334638  524195 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10209,"bootTime":1768029451,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:07:39.334730  524195 start.go:143] virtualization:  
	I0110 10:07:39.337876  524195 out.go:179] * [default-k8s-diff-port-820203] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:07:39.341769  524195 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:07:39.341812  524195 notify.go:221] Checking for updates...
	I0110 10:07:39.347657  524195 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:07:39.350629  524195 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:07:39.353532  524195 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:07:39.356364  524195 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:07:39.359120  524195 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:07:39.362503  524195 config.go:182] Loaded profile config "default-k8s-diff-port-820203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:07:39.363056  524195 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:07:39.390536  524195 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:07:39.390666  524195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:07:39.453784  524195 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:07:39.444669604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
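The host inspection above is the output of the docker system info --format "{{json .}}" call two lines earlier; the fields minikube keys on (cgroup driver, CPU count, total memory) can be pulled out directly (a sketch, assuming jq is available on the host):

    docker system info --format '{{json .}}' | jq '{CgroupDriver, NCPU, MemTotal}'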
	I0110 10:07:39.453884  524195 docker.go:319] overlay module found
	I0110 10:07:39.456998  524195 out.go:179] * Using the docker driver based on existing profile
	I0110 10:07:39.459868  524195 start.go:309] selected driver: docker
	I0110 10:07:39.459886  524195 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:07:39.459989  524195 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:07:39.460744  524195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:07:39.511105  524195 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:07:39.501483651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:07:39.511548  524195 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:07:39.511588  524195 cni.go:84] Creating CNI manager for ""
	I0110 10:07:39.511639  524195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:07:39.511683  524195 start.go:353] cluster config:
	{Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
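The same cluster config is also persisted as JSON under the profile directory (the config.json path the log saves to a few lines below), so it can be inspected without re-parsing the log (a sketch, assuming jq):

    jq '.KubernetesConfig, .Nodes' \
      /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/config.json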
	I0110 10:07:39.514790  524195 out.go:179] * Starting "default-k8s-diff-port-820203" primary control-plane node in "default-k8s-diff-port-820203" cluster
	I0110 10:07:39.517665  524195 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:07:39.520459  524195 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:07:39.523232  524195 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:07:39.523281  524195 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:07:39.523291  524195 cache.go:65] Caching tarball of preloaded images
	I0110 10:07:39.523339  524195 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:07:39.523386  524195 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:07:39.523396  524195 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:07:39.523501  524195 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/config.json ...
	I0110 10:07:39.543249  524195 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:07:39.543272  524195 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:07:39.543287  524195 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:07:39.543318  524195 start.go:360] acquireMachinesLock for default-k8s-diff-port-820203: {Name:mkaca248efde78a9e4798a5020ca02bdd83351f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:07:39.543376  524195 start.go:364] duration metric: took 35.734µs to acquireMachinesLock for "default-k8s-diff-port-820203"
	I0110 10:07:39.543408  524195 start.go:96] Skipping create...Using existing machine configuration
	I0110 10:07:39.543417  524195 fix.go:54] fixHost starting: 
	I0110 10:07:39.543676  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:39.560306  524195 fix.go:112] recreateIfNeeded on default-k8s-diff-port-820203: state=Stopped err=<nil>
	W0110 10:07:39.560341  524195 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 10:07:38.467572  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:40.967167  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:42.968543  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	I0110 10:07:39.563568  524195 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-820203" ...
	I0110 10:07:39.563657  524195 cli_runner.go:164] Run: docker start default-k8s-diff-port-820203
	I0110 10:07:39.824821  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:39.843626  524195 kic.go:430] container "default-k8s-diff-port-820203" state is running.
	I0110 10:07:39.844024  524195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-820203
	I0110 10:07:39.871661  524195 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/config.json ...
	I0110 10:07:39.871899  524195 machine.go:94] provisionDockerMachine start ...
	I0110 10:07:39.872537  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:39.900673  524195 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:39.901363  524195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I0110 10:07:39.901379  524195 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:07:39.902024  524195 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 10:07:43.064450  524195 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-820203
	
	I0110 10:07:43.064475  524195 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-820203"
	I0110 10:07:43.064559  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:43.082455  524195 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:43.082807  524195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I0110 10:07:43.082826  524195 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-820203 && echo "default-k8s-diff-port-820203" | sudo tee /etc/hostname
	I0110 10:07:43.242155  524195 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-820203
	
	I0110 10:07:43.242279  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:43.260356  524195 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:43.260734  524195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I0110 10:07:43.260752  524195 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-820203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-820203/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-820203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:07:43.409166  524195 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:07:43.409192  524195 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:07:43.409237  524195 ubuntu.go:190] setting up certificates
	I0110 10:07:43.409252  524195 provision.go:84] configureAuth start
	I0110 10:07:43.409326  524195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-820203
	I0110 10:07:43.428147  524195 provision.go:143] copyHostCerts
	I0110 10:07:43.428222  524195 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:07:43.428243  524195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:07:43.428327  524195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:07:43.428675  524195 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:07:43.428690  524195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:07:43.428733  524195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:07:43.428810  524195 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:07:43.428820  524195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:07:43.428848  524195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:07:43.428904  524195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-820203 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-820203 localhost minikube]
	I0110 10:07:44.116522  524195 provision.go:177] copyRemoteCerts
	I0110 10:07:44.116594  524195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:07:44.116638  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.137350  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:44.240350  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:07:44.257931  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 10:07:44.275750  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 10:07:44.294014  524195 provision.go:87] duration metric: took 884.74014ms to configureAuth
	I0110 10:07:44.294043  524195 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:07:44.294264  524195 config.go:182] Loaded profile config "default-k8s-diff-port-820203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:07:44.294411  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.312921  524195 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:44.313236  524195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I0110 10:07:44.313259  524195 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:07:44.654373  524195 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:07:44.654394  524195 machine.go:97] duration metric: took 4.782481319s to provisionDockerMachine
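The SSH command above writes CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarts CRI-O; what actually landed on the node can be confirmed with (a sketch, assuming the profile is up):

    out/minikube-linux-arm64 -p default-k8s-diff-port-820203 ssh -- cat /etc/sysconfig/crio.minikube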
	I0110 10:07:44.654404  524195 start.go:293] postStartSetup for "default-k8s-diff-port-820203" (driver="docker")
	I0110 10:07:44.654420  524195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:07:44.654494  524195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:07:44.654531  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.674774  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:44.780741  524195 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:07:44.784105  524195 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:07:44.784136  524195 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:07:44.784149  524195 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:07:44.784214  524195 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:07:44.784294  524195 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:07:44.784402  524195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:07:44.792208  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:07:44.810298  524195 start.go:296] duration metric: took 155.873973ms for postStartSetup
	I0110 10:07:44.810449  524195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:07:44.810548  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.829335  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:44.935654  524195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:07:44.940841  524195 fix.go:56] duration metric: took 5.397418022s for fixHost
	I0110 10:07:44.940878  524195 start.go:83] releasing machines lock for "default-k8s-diff-port-820203", held for 5.397476049s
	I0110 10:07:44.940983  524195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-820203
	I0110 10:07:44.957856  524195 ssh_runner.go:195] Run: cat /version.json
	I0110 10:07:44.957911  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.958180  524195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:07:44.958252  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.983893  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:44.984806  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:45.249263  524195 ssh_runner.go:195] Run: systemctl --version
	I0110 10:07:45.257466  524195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:07:45.307158  524195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:07:45.314578  524195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:07:45.314722  524195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:07:45.323169  524195 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 10:07:45.323196  524195 start.go:496] detecting cgroup driver to use...
	I0110 10:07:45.323259  524195 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:07:45.323324  524195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:07:45.339496  524195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:07:45.353091  524195 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:07:45.353188  524195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:07:45.369377  524195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:07:45.383131  524195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:07:45.513443  524195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:07:45.634131  524195 docker.go:234] disabling docker service ...
	I0110 10:07:45.634201  524195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:07:45.652435  524195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:07:45.667355  524195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:07:45.792617  524195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:07:45.908758  524195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:07:45.923400  524195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:07:45.942454  524195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:07:45.942534  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:45.952701  524195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:07:45.952786  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:45.964902  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:45.976215  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:45.986196  524195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:07:45.994628  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:46.006861  524195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:46.018719  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:46.030101  524195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:07:46.038690  524195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:07:46.047488  524195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:07:46.172945  524195 ssh_runner.go:195] Run: sudo systemctl restart crio
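The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted; a spot-check of the values they set, run from an SSH session on the node (a sketch):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",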
	I0110 10:07:46.343439  524195 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:07:46.343598  524195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:07:46.347514  524195 start.go:574] Will wait 60s for crictl version
	I0110 10:07:46.347623  524195 ssh_runner.go:195] Run: which crictl
	I0110 10:07:46.351340  524195 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:07:46.375416  524195 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:07:46.375569  524195 ssh_runner.go:195] Run: crio --version
	I0110 10:07:46.407314  524195 ssh_runner.go:195] Run: crio --version
	I0110 10:07:46.442766  524195 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:07:46.445631  524195 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-820203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:07:46.461271  524195 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 10:07:46.465532  524195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:07:46.477495  524195 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:07:46.477623  524195 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:07:46.477696  524195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:07:46.518374  524195 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:07:46.518401  524195 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:07:46.518458  524195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:07:46.545794  524195 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:07:46.545816  524195 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:07:46.545825  524195 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I0110 10:07:46.545914  524195 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-820203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
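That drop-in is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down; once the node is running, the effective kubelet unit (base service plus drop-in) can be reviewed with (a sketch):

    out/minikube-linux-arm64 -p default-k8s-diff-port-820203 ssh -- sudo systemctl cat kubelet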
	I0110 10:07:46.545995  524195 ssh_runner.go:195] Run: crio config
	I0110 10:07:46.627783  524195 cni.go:84] Creating CNI manager for ""
	I0110 10:07:46.627806  524195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:07:46.627828  524195 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:07:46.627852  524195 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-820203 NodeName:default-k8s-diff-port-820203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:07:46.628009  524195 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-820203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:07:46.628108  524195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:07:46.642550  524195 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:07:46.642636  524195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:07:46.650703  524195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 10:07:46.670426  524195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:07:46.688235  524195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
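The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new and later diffed against the copy already on disk to decide whether reconfiguration is needed; the same check can be run by hand on the node (a sketch, mirroring the diff the log performs further down):

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new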
	I0110 10:07:46.702905  524195 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:07:46.706570  524195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:07:46.717934  524195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:07:46.839094  524195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:07:46.855566  524195 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203 for IP: 192.168.85.2
	I0110 10:07:46.855588  524195 certs.go:195] generating shared ca certs ...
	I0110 10:07:46.855605  524195 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:46.855739  524195 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:07:46.855790  524195 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:07:46.855802  524195 certs.go:257] generating profile certs ...
	I0110 10:07:46.855896  524195 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.key
	I0110 10:07:46.855967  524195 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.key.15c00bf5
	I0110 10:07:46.856019  524195 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.key
	I0110 10:07:46.856131  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:07:46.856167  524195 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:07:46.856178  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:07:46.856205  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:07:46.856235  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:07:46.856260  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:07:46.856316  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:07:46.857228  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:07:46.907620  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:07:46.952109  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:07:46.978879  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:07:47.003982  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 10:07:47.027547  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:07:47.052846  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:07:47.083361  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:07:47.118506  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:07:47.137784  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:07:47.156374  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:07:47.174898  524195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:07:47.190134  524195 ssh_runner.go:195] Run: openssl version
	I0110 10:07:47.196073  524195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:07:47.203183  524195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:07:47.210601  524195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:07:47.214348  524195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:07:47.214455  524195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:07:47.255579  524195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:07:47.263384  524195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:47.270610  524195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:07:47.277921  524195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:47.281762  524195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:47.281830  524195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:47.324305  524195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:07:47.331763  524195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:07:47.339155  524195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:07:47.346592  524195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:07:47.350484  524195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:07:47.350553  524195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:07:47.391178  524195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
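The /etc/ssl/certs/<hash>.0 names being probed above follow OpenSSL's subject-hash symlink convention; any one of them could be recreated by hand like this (a sketch, using the minikubeCA path from the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"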
	I0110 10:07:47.398708  524195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:07:47.402455  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 10:07:47.444330  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 10:07:47.488328  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 10:07:47.537615  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 10:07:47.580711  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 10:07:47.636127  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
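Each -checkend 86400 call above succeeds only if the certificate stays valid for at least another 24 hours (86400 seconds); to print the actual expiry date of one of them instead (a sketch, from the node):

    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt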
	I0110 10:07:47.720619  524195 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:07:47.720723  524195 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:07:47.720819  524195 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:07:47.780336  524195 cri.go:96] found id: "c8a0479b8f6a642cfc7ee579d8f6e15d1bfbd67e0c4ce4d3617f92af0f46fdde"
	I0110 10:07:47.780376  524195 cri.go:96] found id: "9ca4c73ec1b58d19272d076cb1667350dee8e33e688aefff55b6ee374ff3ceb7"
	I0110 10:07:47.780383  524195 cri.go:96] found id: "812d4c4e5e7a1276ec1e7959d0c233923c12f5bb2d443666556dcafaf0675d47"
	I0110 10:07:47.780415  524195 cri.go:96] found id: "91bbce93fe2f1d6b5b03b3c5e68f84111900401f78fc9963cae132487b50afe9"
	I0110 10:07:47.780427  524195 cri.go:96] found id: ""
	I0110 10:07:47.780509  524195 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 10:07:47.797503  524195 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:07:47Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:07:47.797635  524195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:07:47.810436  524195 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 10:07:47.810457  524195 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 10:07:47.810550  524195 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 10:07:47.823642  524195 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 10:07:47.824591  524195 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-820203" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:07:47.825192  524195 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-308033/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-820203" cluster setting kubeconfig missing "default-k8s-diff-port-820203" context setting]
	I0110 10:07:47.826193  524195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:47.828083  524195 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 10:07:47.837461  524195 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 10:07:47.837503  524195 kubeadm.go:602] duration metric: took 27.033164ms to restartPrimaryControlPlane
	I0110 10:07:47.837529  524195 kubeadm.go:403] duration metric: took 116.938881ms to StartCluster
	I0110 10:07:47.837545  524195 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:47.837619  524195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:07:47.839251  524195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:47.839651  524195 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:07:47.840037  524195 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:07:47.840121  524195 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-820203"
	I0110 10:07:47.840139  524195 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-820203"
	W0110 10:07:47.840145  524195 addons.go:248] addon storage-provisioner should already be in state true
	I0110 10:07:47.840171  524195 host.go:66] Checking if "default-k8s-diff-port-820203" exists ...
	I0110 10:07:47.840768  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:47.841058  524195 config.go:182] Loaded profile config "default-k8s-diff-port-820203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:07:47.841223  524195 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-820203"
	I0110 10:07:47.841243  524195 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-820203"
	W0110 10:07:47.841251  524195 addons.go:248] addon dashboard should already be in state true
	I0110 10:07:47.841277  524195 host.go:66] Checking if "default-k8s-diff-port-820203" exists ...
	I0110 10:07:47.841719  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:47.841867  524195 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-820203"
	I0110 10:07:47.841897  524195 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-820203"
	I0110 10:07:47.842169  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:47.843734  524195 out.go:179] * Verifying Kubernetes components...
	I0110 10:07:47.847284  524195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:07:47.890484  524195 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 10:07:47.893447  524195 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 10:07:47.896587  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 10:07:47.896619  524195 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 10:07:47.896699  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:47.900114  524195 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-820203"
	W0110 10:07:47.900149  524195 addons.go:248] addon default-storageclass should already be in state true
	I0110 10:07:47.900177  524195 host.go:66] Checking if "default-k8s-diff-port-820203" exists ...
	I0110 10:07:47.900679  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:47.908999  524195 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0110 10:07:44.969036  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:47.468895  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	I0110 10:07:47.922738  524195 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:07:47.922762  524195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:07:47.922828  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:47.938587  524195 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:07:47.938610  524195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:07:47.938673  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:47.970055  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:47.990112  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:47.990644  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:48.206913  524195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:07:48.222662  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 10:07:48.222737  524195 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 10:07:48.250094  524195 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-820203" to be "Ready" ...
	I0110 10:07:48.252266  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 10:07:48.252285  524195 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 10:07:48.273723  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 10:07:48.273783  524195 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 10:07:48.277115  524195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:07:48.315310  524195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:07:48.322276  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 10:07:48.322349  524195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 10:07:48.392387  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 10:07:48.392468  524195 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 10:07:48.470174  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 10:07:48.470246  524195 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 10:07:48.499913  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 10:07:48.499986  524195 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 10:07:48.549939  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 10:07:48.550011  524195 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 10:07:48.571806  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:07:48.571883  524195 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 10:07:48.598820  524195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:07:51.204622  524195 node_ready.go:49] node "default-k8s-diff-port-820203" is "Ready"
	I0110 10:07:51.204653  524195 node_ready.go:38] duration metric: took 2.954477041s for node "default-k8s-diff-port-820203" to be "Ready" ...
	I0110 10:07:51.204668  524195 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:07:51.204750  524195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:07:51.313631  524195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.036437944s)
	I0110 10:07:52.315983  524195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.00060309s)
	I0110 10:07:52.316146  524195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.717206205s)
	I0110 10:07:52.316294  524195 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.111515259s)
	I0110 10:07:52.316317  524195 api_server.go:72] duration metric: took 4.476632988s to wait for apiserver process to appear ...
	I0110 10:07:52.316336  524195 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:07:52.316361  524195 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 10:07:52.319303  524195 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-820203 addons enable metrics-server
	
	I0110 10:07:52.322458  524195 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	W0110 10:07:49.469448  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:51.970994  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	I0110 10:07:52.327005  524195 addons.go:530] duration metric: took 4.486965543s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I0110 10:07:52.329413  524195 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:07:52.329442  524195 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 10:07:52.817039  524195 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 10:07:52.826315  524195 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0110 10:07:52.828918  524195 api_server.go:141] control plane version: v1.35.0
	I0110 10:07:52.828941  524195 api_server.go:131] duration metric: took 512.592293ms to wait for apiserver health ...
	I0110 10:07:52.828950  524195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:07:52.836639  524195 system_pods.go:59] 8 kube-system pods found
	I0110 10:07:52.836679  524195 system_pods.go:61] "coredns-7d764666f9-5kgtf" [9e03146c-d6d6-402a-8a86-8558a61c293a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:07:52.836689  524195 system_pods.go:61] "etcd-default-k8s-diff-port-820203" [b88953ce-244f-4cf7-a7b2-46390dea4e94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:07:52.836718  524195 system_pods.go:61] "kindnet-kg5mk" [37256b6f-f68a-4674-a9b8-9985a45a1469] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 10:07:52.836733  524195 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-820203" [6ea2bae0-4962-4e3a-9255-6b2072677d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:07:52.836742  524195 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-820203" [4b15e318-df17-4c04-b306-2f85d72d5b03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:07:52.836752  524195 system_pods.go:61] "kube-proxy-h677z" [d7dc7e83-f97e-4c19-800c-5882ff43b0f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 10:07:52.836759  524195 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-820203" [aed335f9-7712-4dd1-8c66-6c984b34b4e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:07:52.836808  524195 system_pods.go:61] "storage-provisioner" [988b2cb8-be15-4bee-bc89-382c038a9348] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:07:52.836822  524195 system_pods.go:74] duration metric: took 7.866682ms to wait for pod list to return data ...
	I0110 10:07:52.836841  524195 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:07:52.840474  524195 default_sa.go:45] found service account: "default"
	I0110 10:07:52.840620  524195 default_sa.go:55] duration metric: took 3.763913ms for default service account to be created ...
	I0110 10:07:52.840639  524195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:07:52.844727  524195 system_pods.go:86] 8 kube-system pods found
	I0110 10:07:52.844763  524195 system_pods.go:89] "coredns-7d764666f9-5kgtf" [9e03146c-d6d6-402a-8a86-8558a61c293a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:07:52.844774  524195 system_pods.go:89] "etcd-default-k8s-diff-port-820203" [b88953ce-244f-4cf7-a7b2-46390dea4e94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:07:52.844813  524195 system_pods.go:89] "kindnet-kg5mk" [37256b6f-f68a-4674-a9b8-9985a45a1469] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 10:07:52.844830  524195 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-820203" [6ea2bae0-4962-4e3a-9255-6b2072677d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:07:52.844838  524195 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-820203" [4b15e318-df17-4c04-b306-2f85d72d5b03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:07:52.844854  524195 system_pods.go:89] "kube-proxy-h677z" [d7dc7e83-f97e-4c19-800c-5882ff43b0f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 10:07:52.844861  524195 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-820203" [aed335f9-7712-4dd1-8c66-6c984b34b4e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:07:52.844889  524195 system_pods.go:89] "storage-provisioner" [988b2cb8-be15-4bee-bc89-382c038a9348] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:07:52.844897  524195 system_pods.go:126] duration metric: took 4.252291ms to wait for k8s-apps to be running ...
	I0110 10:07:52.844905  524195 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:07:52.844974  524195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:07:52.870223  524195 system_svc.go:56] duration metric: took 25.307603ms WaitForService to wait for kubelet
	I0110 10:07:52.870254  524195 kubeadm.go:587] duration metric: took 5.030568977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:07:52.870306  524195 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:07:52.873549  524195 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:07:52.873581  524195 node_conditions.go:123] node cpu capacity is 2
	I0110 10:07:52.873618  524195 node_conditions.go:105] duration metric: took 3.298681ms to run NodePressure ...
	I0110 10:07:52.873636  524195 start.go:242] waiting for startup goroutines ...
	I0110 10:07:52.873648  524195 start.go:247] waiting for cluster config update ...
	I0110 10:07:52.873671  524195 start.go:256] writing updated cluster config ...
	I0110 10:07:52.873964  524195 ssh_runner.go:195] Run: rm -f paused
	I0110 10:07:52.878537  524195 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:07:52.883246  524195 pod_ready.go:83] waiting for pod "coredns-7d764666f9-5kgtf" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 10:07:54.467376  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:56.968277  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:54.888662  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:07:56.898778  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	I0110 10:07:58.967724  521204 pod_ready.go:94] pod "coredns-7d764666f9-ct6xj" is "Ready"
	I0110 10:07:58.967757  521204 pod_ready.go:86] duration metric: took 37.006147568s for pod "coredns-7d764666f9-ct6xj" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:58.970996  521204 pod_ready.go:83] waiting for pod "etcd-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:58.978892  521204 pod_ready.go:94] pod "etcd-embed-certs-219333" is "Ready"
	I0110 10:07:58.978925  521204 pod_ready.go:86] duration metric: took 7.895098ms for pod "etcd-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:58.981655  521204 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:58.987371  521204 pod_ready.go:94] pod "kube-apiserver-embed-certs-219333" is "Ready"
	I0110 10:07:58.987402  521204 pod_ready.go:86] duration metric: took 5.712356ms for pod "kube-apiserver-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:58.989645  521204 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:59.165905  521204 pod_ready.go:94] pod "kube-controller-manager-embed-certs-219333" is "Ready"
	I0110 10:07:59.165935  521204 pod_ready.go:86] duration metric: took 176.249632ms for pod "kube-controller-manager-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:59.364924  521204 pod_ready.go:83] waiting for pod "kube-proxy-gplbn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:59.765651  521204 pod_ready.go:94] pod "kube-proxy-gplbn" is "Ready"
	I0110 10:07:59.765678  521204 pod_ready.go:86] duration metric: took 400.727049ms for pod "kube-proxy-gplbn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:59.965678  521204 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:00.367211  521204 pod_ready.go:94] pod "kube-scheduler-embed-certs-219333" is "Ready"
	I0110 10:08:00.367242  521204 pod_ready.go:86] duration metric: took 401.538684ms for pod "kube-scheduler-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:00.367257  521204 pod_ready.go:40] duration metric: took 38.411725987s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:08:00.547032  521204 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:08:00.550683  521204 out.go:203] 
	W0110 10:08:00.554038  521204 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:08:00.557577  521204 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:08:00.560853  521204 out.go:179] * Done! kubectl is now configured to use "embed-certs-219333" cluster and "default" namespace by default
	W0110 10:07:59.389112  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:01.889585  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:03.889995  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:06.389472  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:08.889571  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:11.388761  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:13.405059  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 10 10:07:51 embed-certs-219333 crio[664]: time="2026-01-10T10:07:51.990102499Z" level=info msg="Started container" PID=1683 containerID=5854ed490a60a78fb0f1c10a3e5218f7e00dd35bec31b251a72e8c796bb04abe description=kube-system/storage-provisioner/storage-provisioner id=c26a9c40-a132-4f3c-8822-102606b05f68 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5900701b1d75ce70db3752005fd6c576f664114417781d4fcc298dd6fac20f9d
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.63521867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.635261083Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.643028968Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.643212484Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.652590252Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.652623237Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.656989531Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.657025478Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.657051209Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.662877076Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.662915583Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.621434336Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c313ff00-9a16-48b5-ba02-1f3f45c39945 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.622649083Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ee205fd1-0b0d-4bcc-9f14-9a726fdf6a0f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.624532148Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr/dashboard-metrics-scraper" id=14b66cf1-681b-482f-a5f8-3ccf18a4b3bb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.624638544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.631737038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.632449474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.646576981Z" level=info msg="Created container 1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr/dashboard-metrics-scraper" id=14b66cf1-681b-482f-a5f8-3ccf18a4b3bb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.648554454Z" level=info msg="Starting container: 1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82" id=5c1d2b78-9d7d-452b-bf1a-e54e101f8b98 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.650446175Z" level=info msg="Started container" PID=1761 containerID=1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr/dashboard-metrics-scraper id=5c1d2b78-9d7d-452b-bf1a-e54e101f8b98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=531e6931d5b3d4b550549da5daf51f7ccb0665086d3c786b7388279250bbefff
	Jan 10 10:08:07 embed-certs-219333 conmon[1759]: conmon 1e050b12764e82291592 <ninfo>: container 1761 exited with status 1
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.972078344Z" level=info msg="Removing container: e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16" id=55ac013c-8fd0-4d4e-8035-a684dba80717 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.98043723Z" level=info msg="Error loading conmon cgroup of container e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16: cgroup deleted" id=55ac013c-8fd0-4d4e-8035-a684dba80717 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.989011781Z" level=info msg="Removed container e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr/dashboard-metrics-scraper" id=55ac013c-8fd0-4d4e-8035-a684dba80717 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1e050b12764e8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   531e6931d5b3d       dashboard-metrics-scraper-867fb5f87b-ffhlr   kubernetes-dashboard
	5854ed490a60a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago      Running             storage-provisioner         2                   5900701b1d75c       storage-provisioner                          kube-system
	2b627d4e4087c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago      Running             kubernetes-dashboard        0                   51b0ca0ce3ad7       kubernetes-dashboard-b84665fb8-vqjzg         kubernetes-dashboard
	275901198dfed       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           54 seconds ago      Running             coredns                     1                   439236438a8e1       coredns-7d764666f9-ct6xj                     kube-system
	f1a8a35556f78       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           54 seconds ago      Running             kindnet-cni                 1                   f723dd8f17af1       kindnet-px8l8                                kube-system
	02f011561bf27       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago      Exited              storage-provisioner         1                   5900701b1d75c       storage-provisioner                          kube-system
	6c971b45dcd98       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago      Running             busybox                     1                   690eb2207f32b       busybox                                      default
	a259a4eaa5cdd       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           54 seconds ago      Running             kube-proxy                  1                   fc60080b49b3b       kube-proxy-gplbn                             kube-system
	d60025e9eaf7a       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           59 seconds ago      Running             kube-scheduler              1                   18bb27896ba85       kube-scheduler-embed-certs-219333            kube-system
	23d9f7d67b998       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           59 seconds ago      Running             etcd                        1                   0e46f0880c8f1       etcd-embed-certs-219333                      kube-system
	34471ac06a086       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           59 seconds ago      Running             kube-apiserver              1                   38c465dd4ae0e       kube-apiserver-embed-certs-219333            kube-system
	cd78f3af49f4d       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           59 seconds ago      Running             kube-controller-manager     1                   02fee33deac91       kube-controller-manager-embed-certs-219333   kube-system
	
	
	==> coredns [275901198dfed0ab5abf24fc0360d62641f7d4317960467dfe338bdcdf590319] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43639 - 18520 "HINFO IN 5884763266270616804.3985676154169116676. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021087013s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-219333
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-219333
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=embed-certs-219333
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_06_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:06:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-219333
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:08:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:08:11 +0000   Sat, 10 Jan 2026 10:06:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:08:11 +0000   Sat, 10 Jan 2026 10:06:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:08:11 +0000   Sat, 10 Jan 2026 10:06:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:08:11 +0000   Sat, 10 Jan 2026 10:06:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-219333
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                d1d7a876-2a30-486f-839c-2eda89461ed8
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-ct6xj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-embed-certs-219333                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-px8l8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-219333             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-219333    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-gplbn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-219333             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-ffhlr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-vqjzg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node embed-certs-219333 event: Registered Node embed-certs-219333 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node embed-certs-219333 event: Registered Node embed-certs-219333 in Controller
	
	
	==> dmesg <==
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	[Jan10 10:06] overlayfs: idmapped layers are currently not supported
	[ +32.420107] overlayfs: idmapped layers are currently not supported
	[Jan10 10:07] overlayfs: idmapped layers are currently not supported
	[ +31.436967] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [23d9f7d67b99820f29a228986440deb42a7643b108034bd10629d2cd7e74d814] <==
	{"level":"info","ts":"2026-01-10T10:07:16.468905Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:07:16.469121Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:07:16.469131Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:07:16.469974Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T10:07:16.470034Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:07:16.470097Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T10:07:16.466396Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T10:07:17.225928Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:17.226081Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:17.226167Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:17.226212Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:07:17.226257Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:17.227723Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:17.227792Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:07:17.227836Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:17.227873Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:17.230183Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-219333 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:07:17.230277Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:07:17.230524Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:07:17.233242Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:07:17.238273Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:07:17.257566Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T10:07:17.268645Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:07:17.268691Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T10:07:17.316892Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:08:15 up  2:50,  0 user,  load average: 2.02, 1.80, 1.93
	Linux embed-certs-219333 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f1a8a35556f782af9945790a1d2ff1e2bbda5bb31002b8141b6ce3a4fa1c5845] <==
	I0110 10:07:21.425664       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:07:21.426077       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:07:21.426265       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:07:21.426317       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:07:21.426355       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:07:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:07:21.627189       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:07:21.627271       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:07:21.627314       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:07:21.627583       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 10:07:51.627503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 10:07:51.627736       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0110 10:07:51.627827       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 10:07:51.627904       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0110 10:07:53.028028       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:07:53.028060       1 metrics.go:72] Registering metrics
	I0110 10:07:53.028133       1 controller.go:711] "Syncing nftables rules"
	I0110 10:08:01.626683       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:08:01.627649       1 main.go:301] handling current node
	I0110 10:08:11.627576       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:08:11.627611       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34471ac06a0868183f7bbf12a60eede49ca6265f4f8b78f35058634a2296e139] <==
	I0110 10:07:19.999556       1 cache.go:39] Caches are synced for autoregister controller
	I0110 10:07:20.011988       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:20.011988       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 10:07:20.012081       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 10:07:20.012016       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 10:07:20.012028       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0110 10:07:20.018828       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 10:07:20.021968       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 10:07:20.035907       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 10:07:20.080211       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:20.080820       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:07:20.080864       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:20.080871       1 policy_source.go:248] refreshing policies
	I0110 10:07:20.128141       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:07:20.658294       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:07:20.712406       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:07:20.729258       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:07:20.868020       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:07:20.978874       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:07:21.024467       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:07:21.255073       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.43.176"}
	I0110 10:07:21.331402       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.238.158"}
	I0110 10:07:23.422784       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:07:23.702343       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:07:23.823846       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [cd78f3af49f4d143a1ec414506ec5513f9eff8215806fa0cf31e02e797a439b2] <==
	I0110 10:07:23.166112       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:07:23.166117       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 10:07:23.166196       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166231       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.165912       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166424       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166518       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166595       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166637       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166671       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166903       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.167596       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.165901       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.168770       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 10:07:23.168846       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-219333"
	I0110 10:07:23.168901       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 10:07:23.165822       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.165919       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.170844       1 range_allocator.go:177] "Sending events to api server"
	I0110 10:07:23.170977       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 10:07:23.170991       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:07:23.171003       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.171185       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.231957       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.733231       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [a259a4eaa5cdd0e3daddb79fd0994ee011e36fc39d0c7e6328c070219bb7520b] <==
	I0110 10:07:21.484697       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:07:21.593148       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:07:21.699831       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:21.699881       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 10:07:21.700135       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:07:21.720105       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:07:21.720160       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:07:21.735120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:07:21.735620       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:07:21.735884       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:07:21.737295       1 config.go:200] "Starting service config controller"
	I0110 10:07:21.737362       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:07:21.737407       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:07:21.737451       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:07:21.737488       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:07:21.737523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:07:21.747744       1 config.go:309] "Starting node config controller"
	I0110 10:07:21.747836       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:07:21.747867       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:07:21.838561       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 10:07:21.838678       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:07:21.838692       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d60025e9eaf7adb52700e8aca2a8a63d05e321eb59e4e696674205d1577f88e6] <==
	I0110 10:07:18.479649       1 serving.go:386] Generated self-signed cert in-memory
	W0110 10:07:19.948245       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 10:07:19.948275       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 10:07:19.948283       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 10:07:19.948290       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 10:07:20.041559       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 10:07:20.041677       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:07:20.048362       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 10:07:20.048520       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 10:07:20.048532       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:07:20.048547       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 10:07:20.151511       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:07:35 embed-certs-219333 kubelet[792]: I0110 10:07:35.852678     792 scope.go:122] "RemoveContainer" containerID="508438f4896fdd03ee7102ef03bc40d1ef33ab837062b8e4b1123888f1255d73"
	Jan 10 10:07:35 embed-certs-219333 kubelet[792]: E0110 10:07:35.852902     792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ffhlr_kubernetes-dashboard(37b3625d-3938-49e0-8da0-c4a88d83cacd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" podUID="37b3625d-3938-49e0-8da0-c4a88d83cacd"
	Jan 10 10:07:36 embed-certs-219333 kubelet[792]: E0110 10:07:36.853896     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:07:36 embed-certs-219333 kubelet[792]: I0110 10:07:36.853935     792 scope.go:122] "RemoveContainer" containerID="508438f4896fdd03ee7102ef03bc40d1ef33ab837062b8e4b1123888f1255d73"
	Jan 10 10:07:36 embed-certs-219333 kubelet[792]: E0110 10:07:36.854097     792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ffhlr_kubernetes-dashboard(37b3625d-3938-49e0-8da0-c4a88d83cacd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" podUID="37b3625d-3938-49e0-8da0-c4a88d83cacd"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: E0110 10:07:46.621032     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: I0110 10:07:46.621089     792 scope.go:122] "RemoveContainer" containerID="508438f4896fdd03ee7102ef03bc40d1ef33ab837062b8e4b1123888f1255d73"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: I0110 10:07:46.879193     792 scope.go:122] "RemoveContainer" containerID="508438f4896fdd03ee7102ef03bc40d1ef33ab837062b8e4b1123888f1255d73"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: E0110 10:07:46.879540     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: I0110 10:07:46.879915     792 scope.go:122] "RemoveContainer" containerID="e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: E0110 10:07:46.880139     792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ffhlr_kubernetes-dashboard(37b3625d-3938-49e0-8da0-c4a88d83cacd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" podUID="37b3625d-3938-49e0-8da0-c4a88d83cacd"
	Jan 10 10:07:51 embed-certs-219333 kubelet[792]: I0110 10:07:51.905494     792 scope.go:122] "RemoveContainer" containerID="02f011561bf27d692579b54ee785c828ef0f324698b8363d83bfb0f7df8245ee"
	Jan 10 10:07:55 embed-certs-219333 kubelet[792]: E0110 10:07:55.802457     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:07:55 embed-certs-219333 kubelet[792]: I0110 10:07:55.803065     792 scope.go:122] "RemoveContainer" containerID="e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16"
	Jan 10 10:07:55 embed-certs-219333 kubelet[792]: E0110 10:07:55.803420     792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ffhlr_kubernetes-dashboard(37b3625d-3938-49e0-8da0-c4a88d83cacd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" podUID="37b3625d-3938-49e0-8da0-c4a88d83cacd"
	Jan 10 10:07:58 embed-certs-219333 kubelet[792]: E0110 10:07:58.853023     792 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ct6xj" containerName="coredns"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: E0110 10:08:07.620640     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: I0110 10:08:07.620698     792 scope.go:122] "RemoveContainer" containerID="e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: I0110 10:08:07.964647     792 scope.go:122] "RemoveContainer" containerID="e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: E0110 10:08:07.965018     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: I0110 10:08:07.965047     792 scope.go:122] "RemoveContainer" containerID="1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: E0110 10:08:07.965195     792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ffhlr_kubernetes-dashboard(37b3625d-3938-49e0-8da0-c4a88d83cacd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" podUID="37b3625d-3938-49e0-8da0-c4a88d83cacd"
	Jan 10 10:08:12 embed-certs-219333 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 10:08:12 embed-certs-219333 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 10:08:12 embed-certs-219333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2b627d4e4087c16b83689a421c12c3fdc4bd39321c0bcfefeb33bbe33ccfbcbd] <==
	2026/01/10 10:07:29 Using namespace: kubernetes-dashboard
	2026/01/10 10:07:29 Using in-cluster config to connect to apiserver
	2026/01/10 10:07:29 Using secret token for csrf signing
	2026/01/10 10:07:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 10:07:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 10:07:29 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 10:07:29 Generating JWE encryption key
	2026/01/10 10:07:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 10:07:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 10:07:30 Initializing JWE encryption key from synchronized object
	2026/01/10 10:07:30 Creating in-cluster Sidecar client
	2026/01/10 10:07:30 Serving insecurely on HTTP port: 9090
	2026/01/10 10:07:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:08:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:07:29 Starting overwatch
	
	
	==> storage-provisioner [02f011561bf27d692579b54ee785c828ef0f324698b8363d83bfb0f7df8245ee] <==
	I0110 10:07:21.347288       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 10:07:51.352681       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5854ed490a60a78fb0f1c10a3e5218f7e00dd35bec31b251a72e8c796bb04abe] <==
	I0110 10:07:52.012056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:07:52.035456       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:07:52.035606       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 10:07:52.038527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:55.495640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:59.756378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:03.360794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:06.414793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:09.436320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:09.443227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:08:09.443479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:08:09.443633       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-219333_50b4f215-e74a-4fb5-893b-6b4159b01f30!
	I0110 10:08:09.443955       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17aa7d93-7fb8-45e3-85a5-4943a2914558", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-219333_50b4f215-e74a-4fb5-893b-6b4159b01f30 became leader
	W0110 10:08:09.454424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:09.457924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:08:09.544453       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-219333_50b4f215-e74a-4fb5-893b-6b4159b01f30!
	W0110 10:08:11.460462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:11.472946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:13.480747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:13.488554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:15.491971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:15.496862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-219333 -n embed-certs-219333
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-219333 -n embed-certs-219333: exit status 2 (359.697132ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-219333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-219333
helpers_test.go:244: (dbg) docker inspect embed-certs-219333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51",
	        "Created": "2026-01-10T10:06:01.259250049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 521328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:07:08.535648616Z",
	            "FinishedAt": "2026-01-10T10:07:07.612219537Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/hostname",
	        "HostsPath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/hosts",
	        "LogPath": "/var/lib/docker/containers/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51/11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51-json.log",
	        "Name": "/embed-certs-219333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-219333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-219333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "11d72dc06eff5234cfda21e66c6236f5afae9ab840756e31c0f670707f174f51",
	                "LowerDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579/merged",
	                "UpperDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579/diff",
	                "WorkDir": "/var/lib/docker/overlay2/264d793a3aa3cf5353599bdc43b010a93ad0b73ac9abae5561ea736c4c485579/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-219333",
	                "Source": "/var/lib/docker/volumes/embed-certs-219333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-219333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-219333",
	                "name.minikube.sigs.k8s.io": "embed-certs-219333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a9396cfae0d1094ece603c968d06179a28de8a026bf5910df569afb982a624c5",
	            "SandboxKey": "/var/run/docker/netns/a9396cfae0d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-219333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:8b:83:d5:dd:fb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8d1e980d25c729b4e5350b1ccfb2f436b31893785314b40506467e9431269ca0",
	                    "EndpointID": "e72032122ab56d42e7caaa4fb6d93c9b2ce2798cb7f554deb37eab3523ecaa14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-219333",
	                        "11d72dc06eff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-219333 -n embed-certs-219333
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-219333 -n embed-certs-219333: exit status 2 (367.184945ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-219333 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-219333 logs -n 25: (1.354945368s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-729486                                                                                                                                                │ old-k8s-version-729486       │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:03 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:03 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable metrics-server -p no-preload-964204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │                     │
	│ stop    │ -p no-preload-964204 --alsologtostderr -v=3                                                                                                                              │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ addons  │ enable dashboard -p no-preload-964204 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:04 UTC │
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                  │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:05 UTC │
	│ image   │ no-preload-964204 image list --format=json                                                                                                                               │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ pause   │ -p no-preload-964204 --alsologtostderr -v=1                                                                                                                              │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │                     │
	│ delete  │ -p no-preload-964204                                                                                                                                                     │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ delete  │ -p no-preload-964204                                                                                                                                                     │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:06 UTC │
	│ ssh     │ force-systemd-flag-524845 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                     │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p force-systemd-flag-524845                                                                                                                                             │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p disable-driver-mounts-757819                                                                                                                                          │ disable-driver-mounts-757819 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-219333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │                     │
	│ stop    │ -p embed-certs-219333 --alsologtostderr -v=3                                                                                                                             │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-219333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                   │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-820203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-820203 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-820203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │                     │
	│ image   │ embed-certs-219333 image list --format=json                                                                                                                              │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p embed-certs-219333 --alsologtostderr -v=1                                                                                                                             │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:07:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:07:39.332488  524195 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:07:39.332700  524195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:07:39.332712  524195 out.go:374] Setting ErrFile to fd 2...
	I0110 10:07:39.332718  524195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:07:39.333117  524195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:07:39.333604  524195 out.go:368] Setting JSON to false
	I0110 10:07:39.334638  524195 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10209,"bootTime":1768029451,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:07:39.334730  524195 start.go:143] virtualization:  
	I0110 10:07:39.337876  524195 out.go:179] * [default-k8s-diff-port-820203] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:07:39.341769  524195 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:07:39.341812  524195 notify.go:221] Checking for updates...
	I0110 10:07:39.347657  524195 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:07:39.350629  524195 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:07:39.353532  524195 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:07:39.356364  524195 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:07:39.359120  524195 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:07:39.362503  524195 config.go:182] Loaded profile config "default-k8s-diff-port-820203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:07:39.363056  524195 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:07:39.390536  524195 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:07:39.390666  524195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:07:39.453784  524195 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:07:39.444669604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:07:39.453884  524195 docker.go:319] overlay module found
	I0110 10:07:39.456998  524195 out.go:179] * Using the docker driver based on existing profile
	I0110 10:07:39.459868  524195 start.go:309] selected driver: docker
	I0110 10:07:39.459886  524195 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:07:39.459989  524195 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:07:39.460744  524195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:07:39.511105  524195 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:07:39.501483651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:07:39.511548  524195 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:07:39.511588  524195 cni.go:84] Creating CNI manager for ""
	I0110 10:07:39.511639  524195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:07:39.511683  524195 start.go:353] cluster config:
	{Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:07:39.514790  524195 out.go:179] * Starting "default-k8s-diff-port-820203" primary control-plane node in "default-k8s-diff-port-820203" cluster
	I0110 10:07:39.517665  524195 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:07:39.520459  524195 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:07:39.523232  524195 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:07:39.523281  524195 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:07:39.523291  524195 cache.go:65] Caching tarball of preloaded images
	I0110 10:07:39.523339  524195 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:07:39.523386  524195 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:07:39.523396  524195 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:07:39.523501  524195 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/config.json ...
	I0110 10:07:39.543249  524195 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:07:39.543272  524195 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:07:39.543287  524195 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:07:39.543318  524195 start.go:360] acquireMachinesLock for default-k8s-diff-port-820203: {Name:mkaca248efde78a9e4798a5020ca02bdd83351f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:07:39.543376  524195 start.go:364] duration metric: took 35.734µs to acquireMachinesLock for "default-k8s-diff-port-820203"
	I0110 10:07:39.543408  524195 start.go:96] Skipping create...Using existing machine configuration
	I0110 10:07:39.543417  524195 fix.go:54] fixHost starting: 
	I0110 10:07:39.543676  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:39.560306  524195 fix.go:112] recreateIfNeeded on default-k8s-diff-port-820203: state=Stopped err=<nil>
	W0110 10:07:39.560341  524195 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 10:07:38.467572  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:40.967167  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:42.968543  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	I0110 10:07:39.563568  524195 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-820203" ...
	I0110 10:07:39.563657  524195 cli_runner.go:164] Run: docker start default-k8s-diff-port-820203
	I0110 10:07:39.824821  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:39.843626  524195 kic.go:430] container "default-k8s-diff-port-820203" state is running.
	I0110 10:07:39.844024  524195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-820203
	I0110 10:07:39.871661  524195 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/config.json ...
	I0110 10:07:39.871899  524195 machine.go:94] provisionDockerMachine start ...
	I0110 10:07:39.872537  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:39.900673  524195 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:39.901363  524195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I0110 10:07:39.901379  524195 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:07:39.902024  524195 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 10:07:43.064450  524195 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-820203
	
	I0110 10:07:43.064475  524195 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-820203"
	I0110 10:07:43.064559  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:43.082455  524195 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:43.082807  524195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I0110 10:07:43.082826  524195 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-820203 && echo "default-k8s-diff-port-820203" | sudo tee /etc/hostname
	I0110 10:07:43.242155  524195 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-820203
	
	I0110 10:07:43.242279  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:43.260356  524195 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:43.260734  524195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I0110 10:07:43.260752  524195 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-820203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-820203/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-820203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:07:43.409166  524195 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:07:43.409192  524195 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:07:43.409237  524195 ubuntu.go:190] setting up certificates
	I0110 10:07:43.409252  524195 provision.go:84] configureAuth start
	I0110 10:07:43.409326  524195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-820203
	I0110 10:07:43.428147  524195 provision.go:143] copyHostCerts
	I0110 10:07:43.428222  524195 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:07:43.428243  524195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:07:43.428327  524195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:07:43.428675  524195 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:07:43.428690  524195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:07:43.428733  524195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:07:43.428810  524195 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:07:43.428820  524195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:07:43.428848  524195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:07:43.428904  524195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-820203 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-820203 localhost minikube]
	I0110 10:07:44.116522  524195 provision.go:177] copyRemoteCerts
	I0110 10:07:44.116594  524195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:07:44.116638  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.137350  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:44.240350  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:07:44.257931  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 10:07:44.275750  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 10:07:44.294014  524195 provision.go:87] duration metric: took 884.74014ms to configureAuth
	I0110 10:07:44.294043  524195 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:07:44.294264  524195 config.go:182] Loaded profile config "default-k8s-diff-port-820203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:07:44.294411  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.312921  524195 main.go:144] libmachine: Using SSH client type: native
	I0110 10:07:44.313236  524195 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33454 <nil> <nil>}
	I0110 10:07:44.313259  524195 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:07:44.654373  524195 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:07:44.654394  524195 machine.go:97] duration metric: took 4.782481319s to provisionDockerMachine
	I0110 10:07:44.654404  524195 start.go:293] postStartSetup for "default-k8s-diff-port-820203" (driver="docker")
	I0110 10:07:44.654420  524195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:07:44.654494  524195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:07:44.654531  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.674774  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:44.780741  524195 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:07:44.784105  524195 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:07:44.784136  524195 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:07:44.784149  524195 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:07:44.784214  524195 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:07:44.784294  524195 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:07:44.784402  524195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:07:44.792208  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:07:44.810298  524195 start.go:296] duration metric: took 155.873973ms for postStartSetup
	I0110 10:07:44.810449  524195 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:07:44.810548  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.829335  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:44.935654  524195 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:07:44.940841  524195 fix.go:56] duration metric: took 5.397418022s for fixHost
	I0110 10:07:44.940878  524195 start.go:83] releasing machines lock for "default-k8s-diff-port-820203", held for 5.397476049s
	I0110 10:07:44.940983  524195 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-820203
	I0110 10:07:44.957856  524195 ssh_runner.go:195] Run: cat /version.json
	I0110 10:07:44.957911  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.958180  524195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:07:44.958252  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:44.983893  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:44.984806  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:45.249263  524195 ssh_runner.go:195] Run: systemctl --version
	I0110 10:07:45.257466  524195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:07:45.307158  524195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:07:45.314578  524195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:07:45.314722  524195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:07:45.323169  524195 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 10:07:45.323196  524195 start.go:496] detecting cgroup driver to use...
	I0110 10:07:45.323259  524195 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:07:45.323324  524195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:07:45.339496  524195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:07:45.353091  524195 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:07:45.353188  524195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:07:45.369377  524195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:07:45.383131  524195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:07:45.513443  524195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:07:45.634131  524195 docker.go:234] disabling docker service ...
	I0110 10:07:45.634201  524195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:07:45.652435  524195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:07:45.667355  524195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:07:45.792617  524195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:07:45.908758  524195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:07:45.923400  524195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:07:45.942454  524195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:07:45.942534  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:45.952701  524195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:07:45.952786  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:45.964902  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:45.976215  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:45.986196  524195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:07:45.994628  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:46.006861  524195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:46.018719  524195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:07:46.030101  524195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:07:46.038690  524195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:07:46.047488  524195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:07:46.172945  524195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:07:46.343439  524195 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:07:46.343598  524195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:07:46.347514  524195 start.go:574] Will wait 60s for crictl version
	I0110 10:07:46.347623  524195 ssh_runner.go:195] Run: which crictl
	I0110 10:07:46.351340  524195 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:07:46.375416  524195 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:07:46.375569  524195 ssh_runner.go:195] Run: crio --version
	I0110 10:07:46.407314  524195 ssh_runner.go:195] Run: crio --version
	I0110 10:07:46.442766  524195 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:07:46.445631  524195 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-820203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:07:46.461271  524195 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 10:07:46.465532  524195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:07:46.477495  524195 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:07:46.477623  524195 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:07:46.477696  524195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:07:46.518374  524195 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:07:46.518401  524195 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:07:46.518458  524195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:07:46.545794  524195 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:07:46.545816  524195 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:07:46.545825  524195 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I0110 10:07:46.545914  524195 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-820203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:07:46.545995  524195 ssh_runner.go:195] Run: crio config
	I0110 10:07:46.627783  524195 cni.go:84] Creating CNI manager for ""
	I0110 10:07:46.627806  524195 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:07:46.627828  524195 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:07:46.627852  524195 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-820203 NodeName:default-k8s-diff-port-820203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:07:46.628009  524195 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-820203"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:07:46.628108  524195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:07:46.642550  524195 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:07:46.642636  524195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:07:46.650703  524195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 10:07:46.670426  524195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:07:46.688235  524195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I0110 10:07:46.702905  524195 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:07:46.706570  524195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:07:46.717934  524195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:07:46.839094  524195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:07:46.855566  524195 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203 for IP: 192.168.85.2
	I0110 10:07:46.855588  524195 certs.go:195] generating shared ca certs ...
	I0110 10:07:46.855605  524195 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:46.855739  524195 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:07:46.855790  524195 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:07:46.855802  524195 certs.go:257] generating profile certs ...
	I0110 10:07:46.855896  524195 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.key
	I0110 10:07:46.855967  524195 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.key.15c00bf5
	I0110 10:07:46.856019  524195 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.key
	I0110 10:07:46.856131  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:07:46.856167  524195 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:07:46.856178  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:07:46.856205  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:07:46.856235  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:07:46.856260  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:07:46.856316  524195 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:07:46.857228  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:07:46.907620  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:07:46.952109  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:07:46.978879  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:07:47.003982  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 10:07:47.027547  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:07:47.052846  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:07:47.083361  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:07:47.118506  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:07:47.137784  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:07:47.156374  524195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:07:47.174898  524195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:07:47.190134  524195 ssh_runner.go:195] Run: openssl version
	I0110 10:07:47.196073  524195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:07:47.203183  524195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:07:47.210601  524195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:07:47.214348  524195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:07:47.214455  524195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:07:47.255579  524195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:07:47.263384  524195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:47.270610  524195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:07:47.277921  524195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:47.281762  524195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:47.281830  524195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:07:47.324305  524195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:07:47.331763  524195 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:07:47.339155  524195 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:07:47.346592  524195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:07:47.350484  524195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:07:47.350553  524195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:07:47.391178  524195 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:07:47.398708  524195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:07:47.402455  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 10:07:47.444330  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 10:07:47.488328  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 10:07:47.537615  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 10:07:47.580711  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 10:07:47.636127  524195 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 10:07:47.720619  524195 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-820203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-820203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:07:47.720723  524195 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:07:47.720819  524195 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:07:47.780336  524195 cri.go:96] found id: "c8a0479b8f6a642cfc7ee579d8f6e15d1bfbd67e0c4ce4d3617f92af0f46fdde"
	I0110 10:07:47.780376  524195 cri.go:96] found id: "9ca4c73ec1b58d19272d076cb1667350dee8e33e688aefff55b6ee374ff3ceb7"
	I0110 10:07:47.780383  524195 cri.go:96] found id: "812d4c4e5e7a1276ec1e7959d0c233923c12f5bb2d443666556dcafaf0675d47"
	I0110 10:07:47.780415  524195 cri.go:96] found id: "91bbce93fe2f1d6b5b03b3c5e68f84111900401f78fc9963cae132487b50afe9"
	I0110 10:07:47.780427  524195 cri.go:96] found id: ""
	I0110 10:07:47.780509  524195 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 10:07:47.797503  524195 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:07:47Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:07:47.797635  524195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:07:47.810436  524195 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 10:07:47.810457  524195 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 10:07:47.810550  524195 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 10:07:47.823642  524195 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 10:07:47.824591  524195 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-820203" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:07:47.825192  524195 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-308033/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-820203" cluster setting kubeconfig missing "default-k8s-diff-port-820203" context setting]
	I0110 10:07:47.826193  524195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:47.828083  524195 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 10:07:47.837461  524195 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 10:07:47.837503  524195 kubeadm.go:602] duration metric: took 27.033164ms to restartPrimaryControlPlane
	I0110 10:07:47.837529  524195 kubeadm.go:403] duration metric: took 116.938881ms to StartCluster
	I0110 10:07:47.837545  524195 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:47.837619  524195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:07:47.839251  524195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:07:47.839651  524195 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:07:47.840037  524195 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:07:47.840121  524195 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-820203"
	I0110 10:07:47.840139  524195 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-820203"
	W0110 10:07:47.840145  524195 addons.go:248] addon storage-provisioner should already be in state true
	I0110 10:07:47.840171  524195 host.go:66] Checking if "default-k8s-diff-port-820203" exists ...
	I0110 10:07:47.840768  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:47.841058  524195 config.go:182] Loaded profile config "default-k8s-diff-port-820203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:07:47.841223  524195 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-820203"
	I0110 10:07:47.841243  524195 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-820203"
	W0110 10:07:47.841251  524195 addons.go:248] addon dashboard should already be in state true
	I0110 10:07:47.841277  524195 host.go:66] Checking if "default-k8s-diff-port-820203" exists ...
	I0110 10:07:47.841719  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:47.841867  524195 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-820203"
	I0110 10:07:47.841897  524195 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-820203"
	I0110 10:07:47.842169  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:47.843734  524195 out.go:179] * Verifying Kubernetes components...
	I0110 10:07:47.847284  524195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:07:47.890484  524195 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 10:07:47.893447  524195 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 10:07:47.896587  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 10:07:47.896619  524195 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 10:07:47.896699  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:47.900114  524195 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-820203"
	W0110 10:07:47.900149  524195 addons.go:248] addon default-storageclass should already be in state true
	I0110 10:07:47.900177  524195 host.go:66] Checking if "default-k8s-diff-port-820203" exists ...
	I0110 10:07:47.900679  524195 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:07:47.908999  524195 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0110 10:07:44.969036  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:47.468895  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	I0110 10:07:47.922738  524195 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:07:47.922762  524195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:07:47.922828  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:47.938587  524195 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:07:47.938610  524195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:07:47.938673  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:07:47.970055  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:47.990112  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:47.990644  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:07:48.206913  524195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:07:48.222662  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 10:07:48.222737  524195 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 10:07:48.250094  524195 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-820203" to be "Ready" ...
	I0110 10:07:48.252266  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 10:07:48.252285  524195 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 10:07:48.273723  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 10:07:48.273783  524195 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 10:07:48.277115  524195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:07:48.315310  524195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:07:48.322276  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 10:07:48.322349  524195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 10:07:48.392387  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 10:07:48.392468  524195 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 10:07:48.470174  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 10:07:48.470246  524195 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 10:07:48.499913  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 10:07:48.499986  524195 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 10:07:48.549939  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 10:07:48.550011  524195 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 10:07:48.571806  524195 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:07:48.571883  524195 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 10:07:48.598820  524195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:07:51.204622  524195 node_ready.go:49] node "default-k8s-diff-port-820203" is "Ready"
	I0110 10:07:51.204653  524195 node_ready.go:38] duration metric: took 2.954477041s for node "default-k8s-diff-port-820203" to be "Ready" ...
	I0110 10:07:51.204668  524195 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:07:51.204750  524195 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:07:51.313631  524195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.036437944s)
	I0110 10:07:52.315983  524195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.00060309s)
	I0110 10:07:52.316146  524195 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.717206205s)
	I0110 10:07:52.316294  524195 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.111515259s)
	I0110 10:07:52.316317  524195 api_server.go:72] duration metric: took 4.476632988s to wait for apiserver process to appear ...
	I0110 10:07:52.316336  524195 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:07:52.316361  524195 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 10:07:52.319303  524195 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-820203 addons enable metrics-server
	
	I0110 10:07:52.322458  524195 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	W0110 10:07:49.469448  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:51.970994  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	I0110 10:07:52.327005  524195 addons.go:530] duration metric: took 4.486965543s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I0110 10:07:52.329413  524195 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:07:52.329442  524195 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 10:07:52.817039  524195 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 10:07:52.826315  524195 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0110 10:07:52.828918  524195 api_server.go:141] control plane version: v1.35.0
	I0110 10:07:52.828941  524195 api_server.go:131] duration metric: took 512.592293ms to wait for apiserver health ...
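Editor's note: the two healthz probes above (a 500 while the rbac/bootstrap-roles post-start hook is still pending, then a 200 "ok") reflect a simple poll-until-healthy loop. The sketch below is illustrative only, not minikube's api_server.go; the function name, interval handling, and client/TLS setup are assumptions.

// Illustrative sketch of a poll-until-healthy loop against /healthz, in the
// spirit of the wait logged above. Not minikube's api_server.go; the client
// setup (in particular TLS/CA handling) is assumed to be configured elsewhere.
package health

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// WaitForHealthz polls url until it returns HTTP 200 ("ok") or ctx expires.
func WaitForHealthz(ctx context.Context, client *http.Client, url string, interval time.Duration) error {
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok", as in the 200 response above
			}
			// non-200 (e.g. the 500 while bootstrap hooks are pending): retry below
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-time.After(interval):
		}
	}
}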
	I0110 10:07:52.828950  524195 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:07:52.836639  524195 system_pods.go:59] 8 kube-system pods found
	I0110 10:07:52.836679  524195 system_pods.go:61] "coredns-7d764666f9-5kgtf" [9e03146c-d6d6-402a-8a86-8558a61c293a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:07:52.836689  524195 system_pods.go:61] "etcd-default-k8s-diff-port-820203" [b88953ce-244f-4cf7-a7b2-46390dea4e94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:07:52.836718  524195 system_pods.go:61] "kindnet-kg5mk" [37256b6f-f68a-4674-a9b8-9985a45a1469] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 10:07:52.836733  524195 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-820203" [6ea2bae0-4962-4e3a-9255-6b2072677d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:07:52.836742  524195 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-820203" [4b15e318-df17-4c04-b306-2f85d72d5b03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:07:52.836752  524195 system_pods.go:61] "kube-proxy-h677z" [d7dc7e83-f97e-4c19-800c-5882ff43b0f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 10:07:52.836759  524195 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-820203" [aed335f9-7712-4dd1-8c66-6c984b34b4e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:07:52.836808  524195 system_pods.go:61] "storage-provisioner" [988b2cb8-be15-4bee-bc89-382c038a9348] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:07:52.836822  524195 system_pods.go:74] duration metric: took 7.866682ms to wait for pod list to return data ...
	I0110 10:07:52.836841  524195 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:07:52.840474  524195 default_sa.go:45] found service account: "default"
	I0110 10:07:52.840620  524195 default_sa.go:55] duration metric: took 3.763913ms for default service account to be created ...
	I0110 10:07:52.840639  524195 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 10:07:52.844727  524195 system_pods.go:86] 8 kube-system pods found
	I0110 10:07:52.844763  524195 system_pods.go:89] "coredns-7d764666f9-5kgtf" [9e03146c-d6d6-402a-8a86-8558a61c293a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 10:07:52.844774  524195 system_pods.go:89] "etcd-default-k8s-diff-port-820203" [b88953ce-244f-4cf7-a7b2-46390dea4e94] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:07:52.844813  524195 system_pods.go:89] "kindnet-kg5mk" [37256b6f-f68a-4674-a9b8-9985a45a1469] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 10:07:52.844830  524195 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-820203" [6ea2bae0-4962-4e3a-9255-6b2072677d16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:07:52.844838  524195 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-820203" [4b15e318-df17-4c04-b306-2f85d72d5b03] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:07:52.844854  524195 system_pods.go:89] "kube-proxy-h677z" [d7dc7e83-f97e-4c19-800c-5882ff43b0f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 10:07:52.844861  524195 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-820203" [aed335f9-7712-4dd1-8c66-6c984b34b4e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:07:52.844889  524195 system_pods.go:89] "storage-provisioner" [988b2cb8-be15-4bee-bc89-382c038a9348] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 10:07:52.844897  524195 system_pods.go:126] duration metric: took 4.252291ms to wait for k8s-apps to be running ...
	I0110 10:07:52.844905  524195 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 10:07:52.844974  524195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:07:52.870223  524195 system_svc.go:56] duration metric: took 25.307603ms WaitForService to wait for kubelet
	I0110 10:07:52.870254  524195 kubeadm.go:587] duration metric: took 5.030568977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:07:52.870306  524195 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:07:52.873549  524195 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:07:52.873581  524195 node_conditions.go:123] node cpu capacity is 2
	I0110 10:07:52.873618  524195 node_conditions.go:105] duration metric: took 3.298681ms to run NodePressure ...
	I0110 10:07:52.873636  524195 start.go:242] waiting for startup goroutines ...
	I0110 10:07:52.873648  524195 start.go:247] waiting for cluster config update ...
	I0110 10:07:52.873671  524195 start.go:256] writing updated cluster config ...
	I0110 10:07:52.873964  524195 ssh_runner.go:195] Run: rm -f paused
	I0110 10:07:52.878537  524195 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:07:52.883246  524195 pod_ready.go:83] waiting for pod "coredns-7d764666f9-5kgtf" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 10:07:54.467376  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:56.968277  521204 pod_ready.go:104] pod "coredns-7d764666f9-ct6xj" is not "Ready", error: <nil>
	W0110 10:07:54.888662  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:07:56.898778  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	I0110 10:07:58.967724  521204 pod_ready.go:94] pod "coredns-7d764666f9-ct6xj" is "Ready"
	I0110 10:07:58.967757  521204 pod_ready.go:86] duration metric: took 37.006147568s for pod "coredns-7d764666f9-ct6xj" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:58.970996  521204 pod_ready.go:83] waiting for pod "etcd-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:58.978892  521204 pod_ready.go:94] pod "etcd-embed-certs-219333" is "Ready"
	I0110 10:07:58.978925  521204 pod_ready.go:86] duration metric: took 7.895098ms for pod "etcd-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:58.981655  521204 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:58.987371  521204 pod_ready.go:94] pod "kube-apiserver-embed-certs-219333" is "Ready"
	I0110 10:07:58.987402  521204 pod_ready.go:86] duration metric: took 5.712356ms for pod "kube-apiserver-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:58.989645  521204 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:59.165905  521204 pod_ready.go:94] pod "kube-controller-manager-embed-certs-219333" is "Ready"
	I0110 10:07:59.165935  521204 pod_ready.go:86] duration metric: took 176.249632ms for pod "kube-controller-manager-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:59.364924  521204 pod_ready.go:83] waiting for pod "kube-proxy-gplbn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:59.765651  521204 pod_ready.go:94] pod "kube-proxy-gplbn" is "Ready"
	I0110 10:07:59.765678  521204 pod_ready.go:86] duration metric: took 400.727049ms for pod "kube-proxy-gplbn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:07:59.965678  521204 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:00.367211  521204 pod_ready.go:94] pod "kube-scheduler-embed-certs-219333" is "Ready"
	I0110 10:08:00.367242  521204 pod_ready.go:86] duration metric: took 401.538684ms for pod "kube-scheduler-embed-certs-219333" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:00.367257  521204 pod_ready.go:40] duration metric: took 38.411725987s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:08:00.547032  521204 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:08:00.550683  521204 out.go:203] 
	W0110 10:08:00.554038  521204 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:08:00.557577  521204 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:08:00.560853  521204 out.go:179] * Done! kubectl is now configured to use "embed-certs-219333" cluster and "default" namespace by default
	W0110 10:07:59.389112  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:01.889585  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:03.889995  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:06.389472  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:08.889571  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:11.388761  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:13.405059  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
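Editor's note: the repeated pod_ready warnings above come from a label-based wait for kube-system pods to report the Ready condition, with a 4m0s budget per the log. The following is a client-go sketch of that pattern under standard-API assumptions; it is not minikube's pod_ready.go, and the helper name is made up.

// Sketch of a "wait for pod Ready" loop like the one logged above.
// Illustrative only; waitPodReady is a hypothetical helper built on the
// standard client-go / apimachinery APIs.
package ready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named kube-system pod until its Ready condition is
// True, or the timeout (e.g. 4m0s above) elapses.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep waiting, as the log does
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}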
	
	
	==> CRI-O <==
	Jan 10 10:07:51 embed-certs-219333 crio[664]: time="2026-01-10T10:07:51.990102499Z" level=info msg="Started container" PID=1683 containerID=5854ed490a60a78fb0f1c10a3e5218f7e00dd35bec31b251a72e8c796bb04abe description=kube-system/storage-provisioner/storage-provisioner id=c26a9c40-a132-4f3c-8822-102606b05f68 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5900701b1d75ce70db3752005fd6c576f664114417781d4fcc298dd6fac20f9d
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.63521867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.635261083Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.643028968Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.643212484Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.652590252Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.652623237Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.656989531Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.657025478Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.657051209Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.662877076Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:01 embed-certs-219333 crio[664]: time="2026-01-10T10:08:01.662915583Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.621434336Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c313ff00-9a16-48b5-ba02-1f3f45c39945 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.622649083Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ee205fd1-0b0d-4bcc-9f14-9a726fdf6a0f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.624532148Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr/dashboard-metrics-scraper" id=14b66cf1-681b-482f-a5f8-3ccf18a4b3bb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.624638544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.631737038Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.632449474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.646576981Z" level=info msg="Created container 1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr/dashboard-metrics-scraper" id=14b66cf1-681b-482f-a5f8-3ccf18a4b3bb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.648554454Z" level=info msg="Starting container: 1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82" id=5c1d2b78-9d7d-452b-bf1a-e54e101f8b98 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.650446175Z" level=info msg="Started container" PID=1761 containerID=1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr/dashboard-metrics-scraper id=5c1d2b78-9d7d-452b-bf1a-e54e101f8b98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=531e6931d5b3d4b550549da5daf51f7ccb0665086d3c786b7388279250bbefff
	Jan 10 10:08:07 embed-certs-219333 conmon[1759]: conmon 1e050b12764e82291592 <ninfo>: container 1761 exited with status 1
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.972078344Z" level=info msg="Removing container: e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16" id=55ac013c-8fd0-4d4e-8035-a684dba80717 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.98043723Z" level=info msg="Error loading conmon cgroup of container e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16: cgroup deleted" id=55ac013c-8fd0-4d4e-8035-a684dba80717 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:08:07 embed-certs-219333 crio[664]: time="2026-01-10T10:08:07.989011781Z" level=info msg="Removed container e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr/dashboard-metrics-scraper" id=55ac013c-8fd0-4d4e-8035-a684dba80717 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1e050b12764e8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   531e6931d5b3d       dashboard-metrics-scraper-867fb5f87b-ffhlr   kubernetes-dashboard
	5854ed490a60a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   5900701b1d75c       storage-provisioner                          kube-system
	2b627d4e4087c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   51b0ca0ce3ad7       kubernetes-dashboard-b84665fb8-vqjzg         kubernetes-dashboard
	275901198dfed       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           56 seconds ago       Running             coredns                     1                   439236438a8e1       coredns-7d764666f9-ct6xj                     kube-system
	f1a8a35556f78       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           56 seconds ago       Running             kindnet-cni                 1                   f723dd8f17af1       kindnet-px8l8                                kube-system
	02f011561bf27       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   5900701b1d75c       storage-provisioner                          kube-system
	6c971b45dcd98       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   690eb2207f32b       busybox                                      default
	a259a4eaa5cdd       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           56 seconds ago       Running             kube-proxy                  1                   fc60080b49b3b       kube-proxy-gplbn                             kube-system
	d60025e9eaf7a       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   18bb27896ba85       kube-scheduler-embed-certs-219333            kube-system
	23d9f7d67b998       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   0e46f0880c8f1       etcd-embed-certs-219333                      kube-system
	34471ac06a086       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   38c465dd4ae0e       kube-apiserver-embed-certs-219333            kube-system
	cd78f3af49f4d       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   02fee33deac91       kube-controller-manager-embed-certs-219333   kube-system
	
	
	==> coredns [275901198dfed0ab5abf24fc0360d62641f7d4317960467dfe338bdcdf590319] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43639 - 18520 "HINFO IN 5884763266270616804.3985676154169116676. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021087013s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-219333
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-219333
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=embed-certs-219333
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_06_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:06:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-219333
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:08:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:08:11 +0000   Sat, 10 Jan 2026 10:06:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:08:11 +0000   Sat, 10 Jan 2026 10:06:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:08:11 +0000   Sat, 10 Jan 2026 10:06:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:08:11 +0000   Sat, 10 Jan 2026 10:06:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-219333
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                d1d7a876-2a30-486f-839c-2eda89461ed8
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-ct6xj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-embed-certs-219333                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-px8l8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-embed-certs-219333             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-embed-certs-219333    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-gplbn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-embed-certs-219333             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-ffhlr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-vqjzg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  113s  node-controller  Node embed-certs-219333 event: Registered Node embed-certs-219333 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node embed-certs-219333 event: Registered Node embed-certs-219333 in Controller
	
	
	==> dmesg <==
	[Jan10 09:37] overlayfs: idmapped layers are currently not supported
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	[Jan10 10:06] overlayfs: idmapped layers are currently not supported
	[ +32.420107] overlayfs: idmapped layers are currently not supported
	[Jan10 10:07] overlayfs: idmapped layers are currently not supported
	[ +31.436967] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [23d9f7d67b99820f29a228986440deb42a7643b108034bd10629d2cd7e74d814] <==
	{"level":"info","ts":"2026-01-10T10:07:16.468905Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:07:16.469121Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:07:16.469131Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:07:16.469974Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T10:07:16.470034Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:07:16.470097Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T10:07:16.466396Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T10:07:17.225928Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:17.226081Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:17.226167Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:17.226212Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:07:17.226257Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:17.227723Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:17.227792Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:07:17.227836Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:17.227873Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:17.230183Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-219333 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:07:17.230277Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:07:17.230524Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:07:17.233242Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:07:17.238273Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:07:17.257566Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T10:07:17.268645Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:07:17.268691Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T10:07:17.316892Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:08:17 up  2:50,  0 user,  load average: 2.02, 1.80, 1.93
	Linux embed-certs-219333 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f1a8a35556f782af9945790a1d2ff1e2bbda5bb31002b8141b6ce3a4fa1c5845] <==
	I0110 10:07:21.425664       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:07:21.426077       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:07:21.426265       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:07:21.426317       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:07:21.426355       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:07:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:07:21.627189       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:07:21.627271       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:07:21.627314       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:07:21.627583       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 10:07:51.627503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 10:07:51.627736       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0110 10:07:51.627827       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 10:07:51.627904       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0110 10:07:53.028028       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:07:53.028060       1 metrics.go:72] Registering metrics
	I0110 10:07:53.028133       1 controller.go:711] "Syncing nftables rules"
	I0110 10:08:01.626683       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:08:01.627649       1 main.go:301] handling current node
	I0110 10:08:11.627576       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 10:08:11.627611       1 main.go:301] handling current node
	
	
	==> kube-apiserver [34471ac06a0868183f7bbf12a60eede49ca6265f4f8b78f35058634a2296e139] <==
	I0110 10:07:19.999556       1 cache.go:39] Caches are synced for autoregister controller
	I0110 10:07:20.011988       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:20.011988       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 10:07:20.012081       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 10:07:20.012016       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 10:07:20.012028       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0110 10:07:20.018828       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 10:07:20.021968       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 10:07:20.035907       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 10:07:20.080211       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:20.080820       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:07:20.080864       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:20.080871       1 policy_source.go:248] refreshing policies
	I0110 10:07:20.128141       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:07:20.658294       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:07:20.712406       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:07:20.729258       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:07:20.868020       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:07:20.978874       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:07:21.024467       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:07:21.255073       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.43.176"}
	I0110 10:07:21.331402       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.238.158"}
	I0110 10:07:23.422784       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:07:23.702343       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:07:23.823846       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [cd78f3af49f4d143a1ec414506ec5513f9eff8215806fa0cf31e02e797a439b2] <==
	I0110 10:07:23.166112       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:07:23.166117       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 10:07:23.166196       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166231       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.165912       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166424       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166518       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166595       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166637       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166671       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.166903       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.167596       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.165901       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.168770       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 10:07:23.168846       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-219333"
	I0110 10:07:23.168901       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 10:07:23.165822       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.165919       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.170844       1 range_allocator.go:177] "Sending events to api server"
	I0110 10:07:23.170977       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 10:07:23.170991       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:07:23.171003       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.171185       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.231957       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:23.733231       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [a259a4eaa5cdd0e3daddb79fd0994ee011e36fc39d0c7e6328c070219bb7520b] <==
	I0110 10:07:21.484697       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:07:21.593148       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:07:21.699831       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:21.699881       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 10:07:21.700135       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:07:21.720105       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:07:21.720160       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:07:21.735120       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:07:21.735620       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:07:21.735884       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:07:21.737295       1 config.go:200] "Starting service config controller"
	I0110 10:07:21.737362       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:07:21.737407       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:07:21.737451       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:07:21.737488       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:07:21.737523       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:07:21.747744       1 config.go:309] "Starting node config controller"
	I0110 10:07:21.747836       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:07:21.747867       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:07:21.838561       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 10:07:21.838678       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:07:21.838692       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d60025e9eaf7adb52700e8aca2a8a63d05e321eb59e4e696674205d1577f88e6] <==
	I0110 10:07:18.479649       1 serving.go:386] Generated self-signed cert in-memory
	W0110 10:07:19.948245       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 10:07:19.948275       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 10:07:19.948283       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 10:07:19.948290       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 10:07:20.041559       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 10:07:20.041677       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:07:20.048362       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 10:07:20.048520       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 10:07:20.048532       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:07:20.048547       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 10:07:20.151511       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:07:35 embed-certs-219333 kubelet[792]: I0110 10:07:35.852678     792 scope.go:122] "RemoveContainer" containerID="508438f4896fdd03ee7102ef03bc40d1ef33ab837062b8e4b1123888f1255d73"
	Jan 10 10:07:35 embed-certs-219333 kubelet[792]: E0110 10:07:35.852902     792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ffhlr_kubernetes-dashboard(37b3625d-3938-49e0-8da0-c4a88d83cacd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" podUID="37b3625d-3938-49e0-8da0-c4a88d83cacd"
	Jan 10 10:07:36 embed-certs-219333 kubelet[792]: E0110 10:07:36.853896     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:07:36 embed-certs-219333 kubelet[792]: I0110 10:07:36.853935     792 scope.go:122] "RemoveContainer" containerID="508438f4896fdd03ee7102ef03bc40d1ef33ab837062b8e4b1123888f1255d73"
	Jan 10 10:07:36 embed-certs-219333 kubelet[792]: E0110 10:07:36.854097     792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ffhlr_kubernetes-dashboard(37b3625d-3938-49e0-8da0-c4a88d83cacd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" podUID="37b3625d-3938-49e0-8da0-c4a88d83cacd"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: E0110 10:07:46.621032     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: I0110 10:07:46.621089     792 scope.go:122] "RemoveContainer" containerID="508438f4896fdd03ee7102ef03bc40d1ef33ab837062b8e4b1123888f1255d73"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: I0110 10:07:46.879193     792 scope.go:122] "RemoveContainer" containerID="508438f4896fdd03ee7102ef03bc40d1ef33ab837062b8e4b1123888f1255d73"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: E0110 10:07:46.879540     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: I0110 10:07:46.879915     792 scope.go:122] "RemoveContainer" containerID="e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16"
	Jan 10 10:07:46 embed-certs-219333 kubelet[792]: E0110 10:07:46.880139     792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ffhlr_kubernetes-dashboard(37b3625d-3938-49e0-8da0-c4a88d83cacd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" podUID="37b3625d-3938-49e0-8da0-c4a88d83cacd"
	Jan 10 10:07:51 embed-certs-219333 kubelet[792]: I0110 10:07:51.905494     792 scope.go:122] "RemoveContainer" containerID="02f011561bf27d692579b54ee785c828ef0f324698b8363d83bfb0f7df8245ee"
	Jan 10 10:07:55 embed-certs-219333 kubelet[792]: E0110 10:07:55.802457     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:07:55 embed-certs-219333 kubelet[792]: I0110 10:07:55.803065     792 scope.go:122] "RemoveContainer" containerID="e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16"
	Jan 10 10:07:55 embed-certs-219333 kubelet[792]: E0110 10:07:55.803420     792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ffhlr_kubernetes-dashboard(37b3625d-3938-49e0-8da0-c4a88d83cacd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" podUID="37b3625d-3938-49e0-8da0-c4a88d83cacd"
	Jan 10 10:07:58 embed-certs-219333 kubelet[792]: E0110 10:07:58.853023     792 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ct6xj" containerName="coredns"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: E0110 10:08:07.620640     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: I0110 10:08:07.620698     792 scope.go:122] "RemoveContainer" containerID="e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: I0110 10:08:07.964647     792 scope.go:122] "RemoveContainer" containerID="e333c0994bd094b4f09926a2339a2391d8ba418ddb9c670bef8218ec63556e16"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: E0110 10:08:07.965018     792 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: I0110 10:08:07.965047     792 scope.go:122] "RemoveContainer" containerID="1e050b12764e822915920f556d562dfa1787e1e5e7dd48055b51613f1f8b9c82"
	Jan 10 10:08:07 embed-certs-219333 kubelet[792]: E0110 10:08:07.965195     792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ffhlr_kubernetes-dashboard(37b3625d-3938-49e0-8da0-c4a88d83cacd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ffhlr" podUID="37b3625d-3938-49e0-8da0-c4a88d83cacd"
	Jan 10 10:08:12 embed-certs-219333 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 10:08:12 embed-certs-219333 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 10:08:12 embed-certs-219333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
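Editor's note: the kubelet messages above show the crash-loop back-off for dashboard-metrics-scraper doubling across restarts ("back-off 10s", then 20s, then 40s). A toy Go sketch of that doubling schedule follows; the 5-minute cap is an assumption about the default maximum, not something stated in this report.

// Toy illustration of the doubling crash-loop back-off visible in the kubelet
// log above (10s -> 20s -> 40s ...). The 5-minute cap is an assumed default.
package backoff

import "time"

// crashLoopBackOff returns the wait before restart attempt n (n >= 1),
// doubling from initial and clamped at max.
func crashLoopBackOff(n int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < n; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

// Example: crashLoopBackOff(1, 10*time.Second, 5*time.Minute) is 10s,
// attempt 2 is 20s, attempt 3 is 40s, matching the log above.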
	
	
	==> kubernetes-dashboard [2b627d4e4087c16b83689a421c12c3fdc4bd39321c0bcfefeb33bbe33ccfbcbd] <==
	2026/01/10 10:07:29 Starting overwatch
	2026/01/10 10:07:29 Using namespace: kubernetes-dashboard
	2026/01/10 10:07:29 Using in-cluster config to connect to apiserver
	2026/01/10 10:07:29 Using secret token for csrf signing
	2026/01/10 10:07:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 10:07:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 10:07:29 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 10:07:29 Generating JWE encryption key
	2026/01/10 10:07:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 10:07:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 10:07:30 Initializing JWE encryption key from synchronized object
	2026/01/10 10:07:30 Creating in-cluster Sidecar client
	2026/01/10 10:07:30 Serving insecurely on HTTP port: 9090
	2026/01/10 10:07:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:08:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [02f011561bf27d692579b54ee785c828ef0f324698b8363d83bfb0f7df8245ee] <==
	I0110 10:07:21.347288       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 10:07:51.352681       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [5854ed490a60a78fb0f1c10a3e5218f7e00dd35bec31b251a72e8c796bb04abe] <==
	I0110 10:07:52.012056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:07:52.035456       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:07:52.035606       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 10:07:52.038527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:55.495640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:07:59.756378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:03.360794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:06.414793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:09.436320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:09.443227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:08:09.443479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:08:09.443633       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-219333_50b4f215-e74a-4fb5-893b-6b4159b01f30!
	I0110 10:08:09.443955       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17aa7d93-7fb8-45e3-85a5-4943a2914558", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-219333_50b4f215-e74a-4fb5-893b-6b4159b01f30 became leader
	W0110 10:08:09.454424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:09.457924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:08:09.544453       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-219333_50b4f215-e74a-4fb5-893b-6b4159b01f30!
	W0110 10:08:11.460462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:11.472946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:13.480747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:13.488554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:15.491971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:15.496862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:17.502168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:17.507252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-219333 -n embed-certs-219333
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-219333 -n embed-certs-219333: exit status 2 (384.409952ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-219333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.38s)
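Note on the post-mortem log above: the first storage-provisioner instance exits fatally because it cannot reach the in-cluster apiserver service IP (Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout), and the replacement instance only acquires the hostpath lease once the apiserver answers again. The Go sketch below reproduces that kind of reachability probe under stated assumptions: the URL and timeout are copied from the log line, and nothing here is code from the provisioner or the test suite.

// probe_apiserver.go: a minimal reachability probe modeled loosely on the
// "error getting server version" failure in the storage-provisioner log above.
// Illustrative only; the real client authenticates with the in-cluster CA and
// service-account token, which this sketch skips to stay self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // matches the ?timeout=32s in the logged request
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch-only shortcut
		},
	}

	resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
	if err != nil {
		// A "dial tcp ... i/o timeout" here is what made the first
		// provisioner instance above exit with a fatal error.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}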

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-820203 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-820203 --alsologtostderr -v=1: exit status 80 (2.743110472s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-820203 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 10:08:41.789814  529953 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:08:41.790032  529953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:08:41.790059  529953 out.go:374] Setting ErrFile to fd 2...
	I0110 10:08:41.790079  529953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:08:41.790388  529953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:08:41.790720  529953 out.go:368] Setting JSON to false
	I0110 10:08:41.790776  529953 mustload.go:66] Loading cluster: default-k8s-diff-port-820203
	I0110 10:08:41.791237  529953 config.go:182] Loaded profile config "default-k8s-diff-port-820203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:08:41.791772  529953 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-820203 --format={{.State.Status}}
	I0110 10:08:41.824955  529953 host.go:66] Checking if "default-k8s-diff-port-820203" exists ...
	I0110 10:08:41.825298  529953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:08:41.920002  529953 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2026-01-10 10:08:41.905878014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:08:41.920748  529953 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-820203 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarni
ng:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 10:08:41.924235  529953 out.go:179] * Pausing node default-k8s-diff-port-820203 ... 
	I0110 10:08:41.927052  529953 host.go:66] Checking if "default-k8s-diff-port-820203" exists ...
	I0110 10:08:41.927382  529953 ssh_runner.go:195] Run: systemctl --version
	I0110 10:08:41.927435  529953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-820203
	I0110 10:08:41.957142  529953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33454 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/default-k8s-diff-port-820203/id_rsa Username:docker}
	I0110 10:08:42.093060  529953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:08:42.119742  529953 pause.go:52] kubelet running: true
	I0110 10:08:42.119847  529953 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:08:42.523688  529953 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:08:42.523785  529953 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:08:42.685994  529953 cri.go:96] found id: "6a5cc272c2a2c409ffe00a31dc484d5849a8d0e69199c5120f23162d176be795"
	I0110 10:08:42.686023  529953 cri.go:96] found id: "248e711dd6b8d17c98067727b2ed611fce6c5d304e26a6362e3527a9e6d612a7"
	I0110 10:08:42.686029  529953 cri.go:96] found id: "7fa9d7cc6055cc2e1f6692c6ebf8145ae5267292a0c2ea1668696d165b3268f0"
	I0110 10:08:42.686032  529953 cri.go:96] found id: "ddddfc63498775b48e0c47bd5b39459b83f233b5a5bc1ddba5d3384dc4b54429"
	I0110 10:08:42.686036  529953 cri.go:96] found id: "f1a26bb6b7a3f3120f63aa290ec0bc44dd75c300ebd78d7f1e5f7235e903809a"
	I0110 10:08:42.686039  529953 cri.go:96] found id: "c8a0479b8f6a642cfc7ee579d8f6e15d1bfbd67e0c4ce4d3617f92af0f46fdde"
	I0110 10:08:42.686042  529953 cri.go:96] found id: "9ca4c73ec1b58d19272d076cb1667350dee8e33e688aefff55b6ee374ff3ceb7"
	I0110 10:08:42.686045  529953 cri.go:96] found id: "812d4c4e5e7a1276ec1e7959d0c233923c12f5bb2d443666556dcafaf0675d47"
	I0110 10:08:42.686049  529953 cri.go:96] found id: "91bbce93fe2f1d6b5b03b3c5e68f84111900401f78fc9963cae132487b50afe9"
	I0110 10:08:42.686055  529953 cri.go:96] found id: "6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203"
	I0110 10:08:42.686065  529953 cri.go:96] found id: "c49d3ac474b567cf49fa11e878d888eeb4e94ff1b550c07a0dba4f375ccc7359"
	I0110 10:08:42.686072  529953 cri.go:96] found id: ""
	I0110 10:08:42.686120  529953 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:08:42.701832  529953 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:08:42Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:08:43.025255  529953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:08:43.042660  529953 pause.go:52] kubelet running: false
	I0110 10:08:43.042739  529953 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:08:43.347153  529953 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:08:43.347241  529953 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:08:43.471739  529953 cri.go:96] found id: "6a5cc272c2a2c409ffe00a31dc484d5849a8d0e69199c5120f23162d176be795"
	I0110 10:08:43.471774  529953 cri.go:96] found id: "248e711dd6b8d17c98067727b2ed611fce6c5d304e26a6362e3527a9e6d612a7"
	I0110 10:08:43.471781  529953 cri.go:96] found id: "7fa9d7cc6055cc2e1f6692c6ebf8145ae5267292a0c2ea1668696d165b3268f0"
	I0110 10:08:43.471785  529953 cri.go:96] found id: "ddddfc63498775b48e0c47bd5b39459b83f233b5a5bc1ddba5d3384dc4b54429"
	I0110 10:08:43.471788  529953 cri.go:96] found id: "f1a26bb6b7a3f3120f63aa290ec0bc44dd75c300ebd78d7f1e5f7235e903809a"
	I0110 10:08:43.471792  529953 cri.go:96] found id: "c8a0479b8f6a642cfc7ee579d8f6e15d1bfbd67e0c4ce4d3617f92af0f46fdde"
	I0110 10:08:43.471795  529953 cri.go:96] found id: "9ca4c73ec1b58d19272d076cb1667350dee8e33e688aefff55b6ee374ff3ceb7"
	I0110 10:08:43.471798  529953 cri.go:96] found id: "812d4c4e5e7a1276ec1e7959d0c233923c12f5bb2d443666556dcafaf0675d47"
	I0110 10:08:43.471802  529953 cri.go:96] found id: "91bbce93fe2f1d6b5b03b3c5e68f84111900401f78fc9963cae132487b50afe9"
	I0110 10:08:43.471808  529953 cri.go:96] found id: "6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203"
	I0110 10:08:43.471811  529953 cri.go:96] found id: "c49d3ac474b567cf49fa11e878d888eeb4e94ff1b550c07a0dba4f375ccc7359"
	I0110 10:08:43.471831  529953 cri.go:96] found id: ""
	I0110 10:08:43.471889  529953 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:08:44.005738  529953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:08:44.023956  529953 pause.go:52] kubelet running: false
	I0110 10:08:44.024033  529953 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:08:44.260477  529953 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:08:44.260589  529953 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:08:44.389848  529953 cri.go:96] found id: "6a5cc272c2a2c409ffe00a31dc484d5849a8d0e69199c5120f23162d176be795"
	I0110 10:08:44.389891  529953 cri.go:96] found id: "248e711dd6b8d17c98067727b2ed611fce6c5d304e26a6362e3527a9e6d612a7"
	I0110 10:08:44.389897  529953 cri.go:96] found id: "7fa9d7cc6055cc2e1f6692c6ebf8145ae5267292a0c2ea1668696d165b3268f0"
	I0110 10:08:44.389901  529953 cri.go:96] found id: "ddddfc63498775b48e0c47bd5b39459b83f233b5a5bc1ddba5d3384dc4b54429"
	I0110 10:08:44.389904  529953 cri.go:96] found id: "f1a26bb6b7a3f3120f63aa290ec0bc44dd75c300ebd78d7f1e5f7235e903809a"
	I0110 10:08:44.389908  529953 cri.go:96] found id: "c8a0479b8f6a642cfc7ee579d8f6e15d1bfbd67e0c4ce4d3617f92af0f46fdde"
	I0110 10:08:44.389912  529953 cri.go:96] found id: "9ca4c73ec1b58d19272d076cb1667350dee8e33e688aefff55b6ee374ff3ceb7"
	I0110 10:08:44.389915  529953 cri.go:96] found id: "812d4c4e5e7a1276ec1e7959d0c233923c12f5bb2d443666556dcafaf0675d47"
	I0110 10:08:44.389918  529953 cri.go:96] found id: "91bbce93fe2f1d6b5b03b3c5e68f84111900401f78fc9963cae132487b50afe9"
	I0110 10:08:44.389924  529953 cri.go:96] found id: "6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203"
	I0110 10:08:44.389928  529953 cri.go:96] found id: "c49d3ac474b567cf49fa11e878d888eeb4e94ff1b550c07a0dba4f375ccc7359"
	I0110 10:08:44.389931  529953 cri.go:96] found id: ""
	I0110 10:08:44.390002  529953 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:08:44.409557  529953 out.go:203] 
	W0110 10:08:44.412185  529953 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:08:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:08:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 10:08:44.412221  529953 out.go:285] * 
	* 
	W0110 10:08:44.416934  529953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 10:08:44.419502  529953 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-820203 --alsologtostderr -v=1 failed: exit status 80
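The stderr trace above shows the root cause of this Pause failure: after disabling the kubelet, minikube repeatedly runs "sudo runc list -f json" on the node, every attempt exits with status 1 because /run/runc does not exist, and the last error is surfaced as GUEST_PAUSE. The sketch below imitates only that run-and-retry step; it is illustrative, not minikube's implementation (the 300ms delay is taken from the retry.go line in the trace, and the helper name is invented).

// runc_list_retry.go: a simplified stand-in for the "sudo runc list -f json"
// step that the pause trace above retries and finally fails on.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRuncContainers is a hypothetical helper that shells out the same command
// shown in the trace and returns the raw JSON on success.
func listRuncContainers() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// With no /run/runc directory, runc exits 1 and prints
		// "open /run/runc: no such file or directory" on stderr,
		// which is exactly the error wrapped into GUEST_PAUSE above.
		return nil, fmt.Errorf("list running: runc: %w", err)
	}
	return out, nil
}

func main() {
	var lastErr error
	for attempt := 0; attempt < 2; attempt++ {
		if attempt > 0 {
			time.Sleep(300 * time.Millisecond) // the trace retries after 300ms
		}
		if out, err := listRuncContainers(); err == nil {
			fmt.Printf("runc reported %d bytes of container state\n", len(out))
			return
		} else {
			lastErr = err
		}
	}
	fmt.Println("giving up:", lastErr)
}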
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-820203
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-820203:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08",
	        "Created": "2026-01-10T10:06:35.311708414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 524323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:07:39.594720586Z",
	            "FinishedAt": "2026-01-10T10:07:38.199540668Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/hostname",
	        "HostsPath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/hosts",
	        "LogPath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08-json.log",
	        "Name": "/default-k8s-diff-port-820203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-820203:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-820203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08",
	                "LowerDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-820203",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-820203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-820203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-820203",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-820203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86e6f0e86181235e4a1bf355123725afac28bc06e9d9cc35a3a8619792c76785",
	            "SandboxKey": "/var/run/docker/netns/86e6f0e86181",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-820203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:74:25:5f:39:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6955d7ca364871106ab81e8846bbb3fa5f63fcfbf0bbc67db73305008bd736d",
	                    "EndpointID": "98b01d2c205388360cc93f0b9dbdedfc339d3e0759ea9204c5aabc374df29485",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-820203",
	                        "72463dca0fe3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
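The inspect output above also records the dynamically assigned host ports (for this run, 22/tcp maps to 127.0.0.1:33454 and 8444/tcp to 127.0.0.1:33457), which is what the earlier cli_runner template query {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} extracts before opening the SSH session. As a rough equivalent in Go (an illustrative sketch only; the struct models just the fields needed here, and the container name is the profile from this run):

// inspect_ports.go: pulls the SSH host port out of `docker inspect <name>`,
// mirroring the Go-template query used in the pause trace above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type container struct {
	Name            string `json:"Name"`
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-820203").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}

	var containers []container // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		fmt.Println("could not decode inspect output:", err)
		return
	}

	if ssh := containers[0].NetworkSettings.Ports["22/tcp"]; len(ssh) > 0 {
		// For the run captured above this prints 127.0.0.1:33454.
		fmt.Printf("ssh reachable at %s:%s\n", ssh[0].HostIP, ssh[0].HostPort)
	}
}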
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203: exit status 2 (467.066096ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-820203 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-820203 logs -n 25: (1.774359545s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:05 UTC │
	│ image   │ no-preload-964204 image list --format=json                                                                                                                                                                                                    │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ pause   │ -p no-preload-964204 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │                     │
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:06 UTC │
	│ ssh     │ force-systemd-flag-524845 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p force-systemd-flag-524845                                                                                                                                                                                                                  │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p disable-driver-mounts-757819                                                                                                                                                                                                               │ disable-driver-mounts-757819 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-219333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │                     │
	│ stop    │ -p embed-certs-219333 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-219333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-820203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-820203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-820203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:08 UTC │
	│ image   │ embed-certs-219333 image list --format=json                                                                                                                                                                                                   │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p embed-certs-219333 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ delete  │ -p embed-certs-219333                                                                                                                                                                                                                         │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ delete  │ -p embed-certs-219333                                                                                                                                                                                                                         │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ start   │ -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-474984            │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ image   │ default-k8s-diff-port-820203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p default-k8s-diff-port-820203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:08:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:08:22.146423  527825 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:08:22.146576  527825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:08:22.146601  527825 out.go:374] Setting ErrFile to fd 2...
	I0110 10:08:22.146615  527825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:08:22.147060  527825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:08:22.147614  527825 out.go:368] Setting JSON to false
	I0110 10:08:22.149011  527825 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10252,"bootTime":1768029451,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:08:22.149344  527825 start.go:143] virtualization:  
	I0110 10:08:22.153036  527825 out.go:179] * [newest-cni-474984] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:08:22.157147  527825 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:08:22.157237  527825 notify.go:221] Checking for updates...
	I0110 10:08:22.161129  527825 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:08:22.164052  527825 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:08:22.166888  527825 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:08:22.169866  527825 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:08:22.172739  527825 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:08:22.176105  527825 config.go:182] Loaded profile config "default-k8s-diff-port-820203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:08:22.176229  527825 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:08:22.212299  527825 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:08:22.212446  527825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:08:22.276037  527825 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:08:22.266250185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:08:22.276149  527825 docker.go:319] overlay module found
	I0110 10:08:22.279311  527825 out.go:179] * Using the docker driver based on user configuration
	I0110 10:08:22.282261  527825 start.go:309] selected driver: docker
	I0110 10:08:22.282281  527825 start.go:928] validating driver "docker" against <nil>
	I0110 10:08:22.282295  527825 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:08:22.283046  527825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:08:22.338486  527825 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:08:22.329009505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:08:22.338655  527825 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 10:08:22.338685  527825 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 10:08:22.338915  527825 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 10:08:22.342015  527825 out.go:179] * Using Docker driver with root privileges
	I0110 10:08:22.344896  527825 cni.go:84] Creating CNI manager for ""
	I0110 10:08:22.344964  527825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:08:22.344978  527825 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 10:08:22.345061  527825 start.go:353] cluster config:
	{Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:08:22.350041  527825 out.go:179] * Starting "newest-cni-474984" primary control-plane node in "newest-cni-474984" cluster
	I0110 10:08:22.352873  527825 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:08:22.355735  527825 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:08:22.358593  527825 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:08:22.358607  527825 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:08:22.358649  527825 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:08:22.358707  527825 cache.go:65] Caching tarball of preloaded images
	I0110 10:08:22.358794  527825 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:08:22.358806  527825 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:08:22.358950  527825 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/config.json ...
	I0110 10:08:22.358973  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/config.json: {Name:mkcb69c8502d9a46ea5e77ecbcee5b08c7fc7f41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:22.383111  527825 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:08:22.383131  527825 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:08:22.383148  527825 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:08:22.383184  527825 start.go:360] acquireMachinesLock for newest-cni-474984: {Name:mk0515f3568da12603bdab21609a1a4ed360d8a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:08:22.383287  527825 start.go:364] duration metric: took 88.345µs to acquireMachinesLock for "newest-cni-474984"
	I0110 10:08:22.383311  527825 start.go:93] Provisioning new machine with config: &{Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:08:22.383380  527825 start.go:125] createHost starting for "" (driver="docker")
	W0110 10:08:20.890293  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:22.890627  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	I0110 10:08:22.389162  527825 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 10:08:22.389396  527825 start.go:159] libmachine.API.Create for "newest-cni-474984" (driver="docker")
	I0110 10:08:22.389433  527825 client.go:173] LocalClient.Create starting
	I0110 10:08:22.389500  527825 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 10:08:22.389538  527825 main.go:144] libmachine: Decoding PEM data...
	I0110 10:08:22.389555  527825 main.go:144] libmachine: Parsing certificate...
	I0110 10:08:22.389610  527825 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 10:08:22.389632  527825 main.go:144] libmachine: Decoding PEM data...
	I0110 10:08:22.389644  527825 main.go:144] libmachine: Parsing certificate...
	I0110 10:08:22.389996  527825 cli_runner.go:164] Run: docker network inspect newest-cni-474984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 10:08:22.405471  527825 cli_runner.go:211] docker network inspect newest-cni-474984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 10:08:22.405550  527825 network_create.go:284] running [docker network inspect newest-cni-474984] to gather additional debugging logs...
	I0110 10:08:22.405572  527825 cli_runner.go:164] Run: docker network inspect newest-cni-474984
	W0110 10:08:22.425978  527825 cli_runner.go:211] docker network inspect newest-cni-474984 returned with exit code 1
	I0110 10:08:22.426007  527825 network_create.go:287] error running [docker network inspect newest-cni-474984]: docker network inspect newest-cni-474984: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-474984 not found
	I0110 10:08:22.426027  527825 network_create.go:289] output of [docker network inspect newest-cni-474984]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-474984 not found
	
	** /stderr **
	I0110 10:08:22.426131  527825 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:08:22.443167  527825 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 10:08:22.443664  527825 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 10:08:22.443934  527825 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 10:08:22.444371  527825 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a33770}
	I0110 10:08:22.444397  527825 network_create.go:124] attempt to create docker network newest-cni-474984 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 10:08:22.444460  527825 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-474984 newest-cni-474984
	I0110 10:08:22.510396  527825 network_create.go:108] docker network newest-cni-474984 192.168.76.0/24 created
	I0110 10:08:22.510431  527825 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-474984" container
	I0110 10:08:22.510513  527825 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 10:08:22.526751  527825 cli_runner.go:164] Run: docker volume create newest-cni-474984 --label name.minikube.sigs.k8s.io=newest-cni-474984 --label created_by.minikube.sigs.k8s.io=true
	I0110 10:08:22.544321  527825 oci.go:103] Successfully created a docker volume newest-cni-474984
	I0110 10:08:22.544405  527825 cli_runner.go:164] Run: docker run --rm --name newest-cni-474984-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-474984 --entrypoint /usr/bin/test -v newest-cni-474984:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 10:08:23.122707  527825 oci.go:107] Successfully prepared a docker volume newest-cni-474984
	I0110 10:08:23.122784  527825 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:08:23.122795  527825 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 10:08:23.122870  527825 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-474984:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 10:08:27.014128  527825 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-474984:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.891194697s)
	I0110 10:08:27.014172  527825 kic.go:203] duration metric: took 3.891372642s to extract preloaded images to volume ...
	W0110 10:08:27.014321  527825 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 10:08:27.014437  527825 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 10:08:27.083657  527825 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-474984 --name newest-cni-474984 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-474984 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-474984 --network newest-cni-474984 --ip 192.168.76.2 --volume newest-cni-474984:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	W0110 10:08:25.391529  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:27.892932  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	I0110 10:08:28.388297  524195 pod_ready.go:94] pod "coredns-7d764666f9-5kgtf" is "Ready"
	I0110 10:08:28.388329  524195 pod_ready.go:86] duration metric: took 35.505048353s for pod "coredns-7d764666f9-5kgtf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.390856  524195 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.395101  524195 pod_ready.go:94] pod "etcd-default-k8s-diff-port-820203" is "Ready"
	I0110 10:08:28.395181  524195 pod_ready.go:86] duration metric: took 4.295822ms for pod "etcd-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.397263  524195 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.401405  524195 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-820203" is "Ready"
	I0110 10:08:28.401436  524195 pod_ready.go:86] duration metric: took 4.144585ms for pod "kube-apiserver-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.403785  524195 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.586318  524195 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-820203" is "Ready"
	I0110 10:08:28.586347  524195 pod_ready.go:86] duration metric: took 182.531757ms for pod "kube-controller-manager-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.786541  524195 pod_ready.go:83] waiting for pod "kube-proxy-h677z" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:29.187098  524195 pod_ready.go:94] pod "kube-proxy-h677z" is "Ready"
	I0110 10:08:29.187128  524195 pod_ready.go:86] duration metric: took 400.560823ms for pod "kube-proxy-h677z" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:29.387390  524195 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:29.786652  524195 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-820203" is "Ready"
	I0110 10:08:29.786680  524195 pod_ready.go:86] duration metric: took 399.263667ms for pod "kube-scheduler-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:29.786693  524195 pod_ready.go:40] duration metric: took 36.908123471s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:08:29.844610  524195 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:08:29.847572  524195 out.go:203] 
	W0110 10:08:29.850524  524195 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:08:29.853437  524195 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:08:29.856404  524195 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-820203" cluster and "default" namespace by default
	I0110 10:08:27.402837  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Running}}
	I0110 10:08:27.422099  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:08:27.446906  527825 cli_runner.go:164] Run: docker exec newest-cni-474984 stat /var/lib/dpkg/alternatives/iptables
	I0110 10:08:27.522818  527825 oci.go:144] the created container "newest-cni-474984" has a running status.
	I0110 10:08:27.522855  527825 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa...
	I0110 10:08:27.993326  527825 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 10:08:28.021316  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:08:28.042660  527825 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 10:08:28.042686  527825 kic_runner.go:114] Args: [docker exec --privileged newest-cni-474984 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 10:08:28.084118  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:08:28.108302  527825 machine.go:94] provisionDockerMachine start ...
	I0110 10:08:28.108396  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:28.126638  527825 main.go:144] libmachine: Using SSH client type: native
	I0110 10:08:28.126984  527825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I0110 10:08:28.127000  527825 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:08:28.127601  527825 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49438->127.0.0.1:33459: read: connection reset by peer
	I0110 10:08:31.284642  527825 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-474984
	
	I0110 10:08:31.284667  527825 ubuntu.go:182] provisioning hostname "newest-cni-474984"
	I0110 10:08:31.284759  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:31.304321  527825 main.go:144] libmachine: Using SSH client type: native
	I0110 10:08:31.304698  527825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I0110 10:08:31.304714  527825 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-474984 && echo "newest-cni-474984" | sudo tee /etc/hostname
	I0110 10:08:31.475094  527825 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-474984
	
	I0110 10:08:31.475170  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:31.494093  527825 main.go:144] libmachine: Using SSH client type: native
	I0110 10:08:31.494430  527825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I0110 10:08:31.494455  527825 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-474984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-474984/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-474984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:08:31.644665  527825 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:08:31.644693  527825 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:08:31.644719  527825 ubuntu.go:190] setting up certificates
	I0110 10:08:31.644729  527825 provision.go:84] configureAuth start
	I0110 10:08:31.644789  527825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:08:31.661786  527825 provision.go:143] copyHostCerts
	I0110 10:08:31.661858  527825 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:08:31.661872  527825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:08:31.661953  527825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:08:31.662056  527825 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:08:31.662068  527825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:08:31.662099  527825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:08:31.662186  527825 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:08:31.662197  527825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:08:31.662226  527825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:08:31.662295  527825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.newest-cni-474984 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-474984]
	I0110 10:08:32.100875  527825 provision.go:177] copyRemoteCerts
	I0110 10:08:32.100973  527825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:08:32.101049  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:32.118733  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:32.220360  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:08:32.239658  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 10:08:32.259607  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:08:32.278441  527825 provision.go:87] duration metric: took 633.688159ms to configureAuth
	I0110 10:08:32.278472  527825 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:08:32.278673  527825 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:08:32.278785  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:32.298521  527825 main.go:144] libmachine: Using SSH client type: native
	I0110 10:08:32.298870  527825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I0110 10:08:32.298884  527825 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:08:32.711560  527825 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:08:32.711589  527825 machine.go:97] duration metric: took 4.603262136s to provisionDockerMachine
	I0110 10:08:32.711600  527825 client.go:176] duration metric: took 10.322156275s to LocalClient.Create
	I0110 10:08:32.711619  527825 start.go:167] duration metric: took 10.32222313s to libmachine.API.Create "newest-cni-474984"
	I0110 10:08:32.711634  527825 start.go:293] postStartSetup for "newest-cni-474984" (driver="docker")
	I0110 10:08:32.711645  527825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:08:32.711708  527825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:08:32.711762  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:32.728367  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:32.832570  527825 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:08:32.838490  527825 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:08:32.838566  527825 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:08:32.838595  527825 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:08:32.838677  527825 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:08:32.838810  527825 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:08:32.838987  527825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:08:32.850938  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:08:32.870704  527825 start.go:296] duration metric: took 159.055637ms for postStartSetup
	I0110 10:08:32.871091  527825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:08:32.888132  527825 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/config.json ...
	I0110 10:08:32.888406  527825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:08:32.888456  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:32.904904  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:33.011613  527825 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:08:33.017363  527825 start.go:128] duration metric: took 10.633968529s to createHost
	I0110 10:08:33.017389  527825 start.go:83] releasing machines lock for "newest-cni-474984", held for 10.634094133s
	I0110 10:08:33.017473  527825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:08:33.034568  527825 ssh_runner.go:195] Run: cat /version.json
	I0110 10:08:33.034619  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:33.034910  527825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:08:33.034965  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:33.058112  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:33.068762  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:33.274754  527825 ssh_runner.go:195] Run: systemctl --version
	I0110 10:08:33.281449  527825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:08:33.321146  527825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:08:33.325799  527825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:08:33.325869  527825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:08:33.358024  527825 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 10:08:33.358047  527825 start.go:496] detecting cgroup driver to use...
	I0110 10:08:33.358080  527825 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:08:33.358132  527825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:08:33.375752  527825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:08:33.389629  527825 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:08:33.389725  527825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:08:33.406725  527825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:08:33.425868  527825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:08:33.568578  527825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:08:33.696777  527825 docker.go:234] disabling docker service ...
	I0110 10:08:33.696874  527825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:08:33.718412  527825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:08:33.733486  527825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:08:33.855146  527825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:08:33.980055  527825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:08:33.993298  527825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:08:34.009234  527825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:08:34.009373  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.019335  527825 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:08:34.019428  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.028686  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.038842  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.048981  527825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:08:34.058256  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.068269  527825 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.083733  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.092589  527825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:08:34.100969  527825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:08:34.108488  527825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:08:34.230025  527825 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:08:34.405632  527825 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:08:34.405743  527825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:08:34.410340  527825 start.go:574] Will wait 60s for crictl version
	I0110 10:08:34.410477  527825 ssh_runner.go:195] Run: which crictl
	I0110 10:08:34.414398  527825 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:08:34.439733  527825 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:08:34.439866  527825 ssh_runner.go:195] Run: crio --version
	I0110 10:08:34.476277  527825 ssh_runner.go:195] Run: crio --version
	I0110 10:08:34.511520  527825 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:08:34.514424  527825 cli_runner.go:164] Run: docker network inspect newest-cni-474984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:08:34.534982  527825 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:08:34.539078  527825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:08:34.552098  527825 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 10:08:34.554885  527825 kubeadm.go:884] updating cluster {Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:08:34.555014  527825 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:08:34.555094  527825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:08:34.592857  527825 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:08:34.592879  527825 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:08:34.592943  527825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:08:34.622915  527825 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:08:34.622936  527825 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:08:34.622946  527825 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 10:08:34.623034  527825 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-474984 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:08:34.623126  527825 ssh_runner.go:195] Run: crio config
	I0110 10:08:34.686994  527825 cni.go:84] Creating CNI manager for ""
	I0110 10:08:34.687020  527825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:08:34.687044  527825 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 10:08:34.687074  527825 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-474984 NodeName:newest-cni-474984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:08:34.687226  527825 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-474984"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:08:34.687300  527825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:08:34.697113  527825 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:08:34.697237  527825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:08:34.704917  527825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 10:08:34.718899  527825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:08:34.732924  527825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0110 10:08:34.746833  527825 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:08:34.750525  527825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:08:34.760214  527825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:08:34.871406  527825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:08:34.888727  527825 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984 for IP: 192.168.76.2
	I0110 10:08:34.888750  527825 certs.go:195] generating shared ca certs ...
	I0110 10:08:34.888768  527825 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:34.888913  527825 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:08:34.888965  527825 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:08:34.888973  527825 certs.go:257] generating profile certs ...
	I0110 10:08:34.889026  527825 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.key
	I0110 10:08:34.889054  527825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.crt with IP's: []
	I0110 10:08:35.183183  527825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.crt ...
	I0110 10:08:35.183218  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.crt: {Name:mk7b4d0de44caf1237e6eb083960f54b622a5c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.183422  527825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.key ...
	I0110 10:08:35.183435  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.key: {Name:mk11245467da1bac33fac3d275cc47339df26572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.183545  527825 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key.168eb993
	I0110 10:08:35.183562  527825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt.168eb993 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 10:08:35.347315  527825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt.168eb993 ...
	I0110 10:08:35.347347  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt.168eb993: {Name:mkdc93a78831f87e91e31eb8e9e04c917c9e3483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.347536  527825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key.168eb993 ...
	I0110 10:08:35.347550  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key.168eb993: {Name:mkb57143d5d90fd749c80c0a1bfd69e4d7a3e03b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.347637  527825 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt.168eb993 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt
	I0110 10:08:35.347719  527825 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key.168eb993 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key
	I0110 10:08:35.347780  527825 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key
	I0110 10:08:35.347798  527825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.crt with IP's: []
	I0110 10:08:35.744305  527825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.crt ...
	I0110 10:08:35.744336  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.crt: {Name:mk3640570b1ee09e5fb7441e05560bb5a83443c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.744535  527825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key ...
	I0110 10:08:35.744547  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key: {Name:mk29baa696dd389b8ae263ce69864e5c9f0229f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.744740  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:08:35.744788  527825 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:08:35.744802  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:08:35.744829  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:08:35.744861  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:08:35.744887  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:08:35.744935  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:08:35.745557  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:08:35.764030  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:08:35.782903  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:08:35.800655  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:08:35.818657  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 10:08:35.836423  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 10:08:35.855885  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:08:35.874234  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 10:08:35.891845  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:08:35.908992  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:08:35.926625  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:08:35.945318  527825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:08:35.959774  527825 ssh_runner.go:195] Run: openssl version
	I0110 10:08:35.968153  527825 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:08:35.975768  527825 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:08:35.984166  527825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:08:35.987869  527825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:08:35.987935  527825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:08:36.029981  527825 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:08:36.038145  527825 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
	I0110 10:08:36.046125  527825 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:08:36.053831  527825 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:08:36.061975  527825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:08:36.066038  527825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:08:36.066107  527825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:08:36.107424  527825 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:08:36.114876  527825 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 10:08:36.122557  527825 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:08:36.130426  527825 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:08:36.138300  527825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:08:36.142233  527825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:08:36.142337  527825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:08:36.183554  527825 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:08:36.191110  527825 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
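	(Editor's note: the run above is effectively a manual equivalent of update-ca-certificates. Each PEM is copied to /usr/share/ca-certificates, linked into /etc/ssl/certs, and a second symlink named after its OpenSSL subject hash (hash.0, e.g. b5213941.0) is added so OpenSSL's CA directory lookup can resolve it. A minimal sketch of that pattern, with a placeholder certificate path that is not part of this test run:
	
	  # illustrative only; CERT is a placeholder path
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  sudo ln -fs "$CERT" /etc/ssl/certs/$(basename "$CERT")
	  HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
	  sudo ln -fs /etc/ssl/certs/$(basename "$CERT") "/etc/ssl/certs/${HASH}.0"
	)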
	I0110 10:08:36.198590  527825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:08:36.202613  527825 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 10:08:36.202716  527825 kubeadm.go:401] StartCluster: {Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:08:36.202806  527825 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:08:36.202866  527825 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:08:36.232340  527825 cri.go:96] found id: ""
	I0110 10:08:36.232415  527825 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:08:36.240998  527825 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 10:08:36.249270  527825 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:08:36.249372  527825 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:08:36.257694  527825 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:08:36.257717  527825 kubeadm.go:158] found existing configuration files:
	
	I0110 10:08:36.257816  527825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 10:08:36.266300  527825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:08:36.266410  527825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:08:36.274127  527825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 10:08:36.281966  527825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:08:36.282038  527825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:08:36.291202  527825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 10:08:36.301411  527825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:08:36.301483  527825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:08:36.309209  527825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 10:08:36.317001  527825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:08:36.317092  527825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
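	(Editor's note: the four grep/rm pairs above amount to a single cleanup loop: any pre-existing kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it on init; here every grep exits with status 2 simply because the files do not exist yet. A compressed sketch of that logic, for illustration only:
	
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	      sudo rm -f "/etc/kubernetes/$f"   # stale or missing; kubeadm will rewrite it
	    fi
	  done
	)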
	I0110 10:08:36.324478  527825 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:08:36.368847  527825 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:08:36.369218  527825 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:08:36.452920  527825 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:08:36.453059  527825 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:08:36.453127  527825 kubeadm.go:319] OS: Linux
	I0110 10:08:36.453212  527825 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:08:36.453294  527825 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:08:36.453369  527825 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:08:36.453453  527825 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:08:36.453526  527825 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:08:36.453609  527825 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:08:36.453679  527825 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:08:36.453758  527825 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:08:36.453831  527825 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:08:36.527310  527825 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:08:36.527485  527825 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:08:36.527598  527825 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:08:36.535513  527825 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:08:36.542258  527825 out.go:252]   - Generating certificates and keys ...
	I0110 10:08:36.542428  527825 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:08:36.542541  527825 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:08:36.771725  527825 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:08:37.215703  527825 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:08:37.310671  527825 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:08:37.506271  527825 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:08:37.810543  527825 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:08:37.810909  527825 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-474984] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 10:08:37.960977  527825 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:08:37.961182  527825 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-474984] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 10:08:38.216380  527825 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:08:38.641321  527825 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:08:38.717281  527825 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:08:38.717603  527825 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:08:38.860205  527825 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:08:39.146319  527825 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:08:39.804040  527825 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:08:39.995343  527825 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:08:40.396771  527825 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:08:40.397510  527825 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:08:40.400173  527825 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:08:40.405664  527825 out.go:252]   - Booting up control plane ...
	I0110 10:08:40.405837  527825 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:08:40.405953  527825 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:08:40.406041  527825 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:08:40.422368  527825 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:08:40.422910  527825 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:08:40.431913  527825 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:08:40.432783  527825 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:08:40.433039  527825 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:08:40.569264  527825 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:08:40.569433  527825 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:08:41.072799  527825 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.862247ms
	I0110 10:08:41.072962  527825 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 10:08:41.073082  527825 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0110 10:08:41.073210  527825 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 10:08:41.073320  527825 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
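	(Editor's note: the three control-plane-check probes above hit the components' standard local health endpoints: kube-apiserver's /livez on the advertised address and port (192.168.76.2:8443), kube-controller-manager's /healthz on 127.0.0.1:10257, and kube-scheduler's /livez on 127.0.0.1:10259. From inside the node they can be queried by hand, e.g. (illustrative only; -k skips TLS verification against the cluster CA):
	
	  curl -k https://192.168.76.2:8443/livez
	  curl -k https://127.0.0.1:10257/healthz
	  curl -k https://127.0.0.1:10259/livez
	)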
	
	
	==> CRI-O <==
	Jan 10 10:08:23 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:23.315306981Z" level=info msg="Started container" PID=1697 containerID=6a5cc272c2a2c409ffe00a31dc484d5849a8d0e69199c5120f23162d176be795 description=kube-system/storage-provisioner/storage-provisioner id=343e7bc1-8e24-4fd9-adf9-8e18d45f123e name=/runtime.v1.RuntimeService/StartContainer sandboxID=26c60f23d70559a511b8eb11c300f04fb883fb39c7fa72e74f6981e63aa5c211
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.836210185Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.836870528Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.842885846Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.843028723Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.847232976Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.84726729Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.851631823Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.851665242Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.85168916Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.859779217Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.859812784Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.060402295Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9125a270-70f5-4213-895d-524bf6e5b107 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.064649002Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=215cd002-5b37-4932-a6d5-1b20b3508ebf name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.06604492Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw/dashboard-metrics-scraper" id=ca3e696e-28c4-44c7-ac4b-99dab1f35992 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.06619939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.078586797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.079543583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.099611823Z" level=info msg="Created container 6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw/dashboard-metrics-scraper" id=ca3e696e-28c4-44c7-ac4b-99dab1f35992 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.105306592Z" level=info msg="Starting container: 6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203" id=733653aa-d3d7-43a1-885f-b41fd11c301a name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.109755892Z" level=info msg="Started container" PID=1778 containerID=6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw/dashboard-metrics-scraper id=733653aa-d3d7-43a1-885f-b41fd11c301a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f303a6ae6fe27e94e07ede400ef920df6bc1ee61306999684233ca067ceffe5
	Jan 10 10:08:38 default-k8s-diff-port-820203 conmon[1776]: conmon 6a489a7e9368f7ee0254 <ninfo>: container 1778 exited with status 1
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.298401445Z" level=info msg="Removing container: bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4" id=29309262-30e8-4ad8-909f-1a8e5fcecb05 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.30670546Z" level=info msg="Error loading conmon cgroup of container bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4: cgroup deleted" id=29309262-30e8-4ad8-909f-1a8e5fcecb05 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.309641509Z" level=info msg="Removed container bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw/dashboard-metrics-scraper" id=29309262-30e8-4ad8-909f-1a8e5fcecb05 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	6a489a7e9368f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   8f303a6ae6fe2       dashboard-metrics-scraper-867fb5f87b-47kqw             kubernetes-dashboard
	6a5cc272c2a2c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago      Running             storage-provisioner         2                   26c60f23d7055       storage-provisioner                                    kube-system
	c49d3ac474b56       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago      Running             kubernetes-dashboard        0                   b4f9b9b0b7e58       kubernetes-dashboard-b84665fb8-sd2l8                   kubernetes-dashboard
	248e711dd6b8d       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           53 seconds ago      Running             coredns                     1                   dd8996126c524       coredns-7d764666f9-5kgtf                               kube-system
	a6169d1727208       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   b5336fc0e5737       busybox                                                default
	7fa9d7cc6055c       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           53 seconds ago      Running             kube-proxy                  1                   0aee706a394a0       kube-proxy-h677z                                       kube-system
	ddddfc6349877       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago      Exited              storage-provisioner         1                   26c60f23d7055       storage-provisioner                                    kube-system
	f1a26bb6b7a3f       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   7156a00a95d26       kindnet-kg5mk                                          kube-system
	c8a0479b8f6a6       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           58 seconds ago      Running             kube-controller-manager     1                   c583b17905c05       kube-controller-manager-default-k8s-diff-port-820203   kube-system
	9ca4c73ec1b58       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           58 seconds ago      Running             kube-scheduler              1                   03b1687e4066f       kube-scheduler-default-k8s-diff-port-820203            kube-system
	812d4c4e5e7a1       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           58 seconds ago      Running             etcd                        1                   0325b8ea3f231       etcd-default-k8s-diff-port-820203                      kube-system
	91bbce93fe2f1       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           58 seconds ago      Running             kube-apiserver              1                   cff8092f62bf4       kube-apiserver-default-k8s-diff-port-820203            kube-system
	
	
	==> coredns [248e711dd6b8d17c98067727b2ed611fce6c5d304e26a6362e3527a9e6d612a7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38946 - 19880 "HINFO IN 6707206200389323888.944827970982932827. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012293465s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-820203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-820203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=default-k8s-diff-port-820203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_06_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:06:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-820203
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:08:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:08:32 +0000   Sat, 10 Jan 2026 10:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:08:32 +0000   Sat, 10 Jan 2026 10:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:08:32 +0000   Sat, 10 Jan 2026 10:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:08:32 +0000   Sat, 10 Jan 2026 10:07:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-820203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                3458310f-51b7-4cba-9b86-ae28b618509b
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-5kgtf                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-default-k8s-diff-port-820203                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         113s
	  kube-system                 kindnet-kg5mk                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-820203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-820203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-h677z                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-820203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-47kqw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-sd2l8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node default-k8s-diff-port-820203 event: Registered Node default-k8s-diff-port-820203 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node default-k8s-diff-port-820203 event: Registered Node default-k8s-diff-port-820203 in Controller
	
	
	==> dmesg <==
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	[Jan10 10:06] overlayfs: idmapped layers are currently not supported
	[ +32.420107] overlayfs: idmapped layers are currently not supported
	[Jan10 10:07] overlayfs: idmapped layers are currently not supported
	[ +31.436967] overlayfs: idmapped layers are currently not supported
	[Jan10 10:08] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [812d4c4e5e7a1276ec1e7959d0c233923c12f5bb2d443666556dcafaf0675d47] <==
	{"level":"info","ts":"2026-01-10T10:07:48.199602Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:07:48.199611Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:07:48.200017Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T10:07:48.200029Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T10:07:48.215394Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-10T10:07:48.215469Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:07:48.215530Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T10:07:48.908541Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:48.908677Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:48.908757Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:48.908824Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:07:48.908866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:48.912535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:48.912601Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:07:48.912644Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:48.912680Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:48.914977Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-820203 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:07:48.915165Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:07:48.916150Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:07:48.918213Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-10T10:07:48.920532Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:07:48.923942Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:07:48.923974Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T10:07:48.930176Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:07:48.934636Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:08:46 up  2:51,  0 user,  load average: 3.74, 2.27, 2.09
	Linux default-k8s-diff-port-820203 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f1a26bb6b7a3f3120f63aa290ec0bc44dd75c300ebd78d7f1e5f7235e903809a] <==
	I0110 10:07:52.639182       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:07:52.645394       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 10:07:52.645631       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:07:52.645944       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:07:52.645999       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:07:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:07:52.828311       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:07:52.828401       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:07:52.828435       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:07:52.829366       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 10:08:22.830836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0110 10:08:22.830954       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 10:08:22.831123       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 10:08:22.832407       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0110 10:08:24.129075       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:08:24.129111       1 metrics.go:72] Registering metrics
	I0110 10:08:24.129178       1 controller.go:711] "Syncing nftables rules"
	I0110 10:08:32.828664       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 10:08:32.829305       1 main.go:301] handling current node
	I0110 10:08:42.827950       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 10:08:42.827989       1 main.go:301] handling current node
	
	
	==> kube-apiserver [91bbce93fe2f1d6b5b03b3c5e68f84111900401f78fc9963cae132487b50afe9] <==
	I0110 10:07:51.247846       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 10:07:51.255956       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 10:07:51.259923       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 10:07:51.262123       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:51.262160       1 policy_source.go:248] refreshing policies
	I0110 10:07:51.264123       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:07:51.267671       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 10:07:51.267727       1 aggregator.go:187] initial CRD sync complete...
	I0110 10:07:51.267754       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 10:07:51.267761       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 10:07:51.267770       1 cache.go:39] Caches are synced for autoregister controller
	I0110 10:07:51.272213       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:07:51.283816       1 shared_informer.go:377] "Caches are synced"
	E0110 10:07:51.287100       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 10:07:51.801632       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:07:51.853108       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:07:51.855036       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:07:51.898111       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:07:51.928222       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:07:51.972897       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:07:52.194852       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.85.105"}
	I0110 10:07:52.295490       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.158.241"}
	I0110 10:07:54.862924       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 10:07:54.968382       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:07:55.027975       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c8a0479b8f6a642cfc7ee579d8f6e15d1bfbd67e0c4ce4d3617f92af0f46fdde] <==
	I0110 10:07:54.443836       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.443934       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.443992       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.444047       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.444384       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.444920       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.444954       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.445022       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-820203"
	I0110 10:07:54.445080       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.445436       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.445469       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.445664       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.450154       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.450390       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.450445       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.451529       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 10:07:54.451629       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.451672       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.451742       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.451784       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.452487       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.523244       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.544755       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.544779       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:07:54.544785       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [7fa9d7cc6055cc2e1f6692c6ebf8145ae5267292a0c2ea1668696d165b3268f0] <==
	I0110 10:07:52.665628       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:07:52.802325       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:07:52.903463       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:52.903517       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 10:07:52.903598       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:07:52.944967       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:07:52.945025       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:07:52.949063       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:07:52.949372       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:07:52.949398       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:07:52.951147       1 config.go:200] "Starting service config controller"
	I0110 10:07:52.951174       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:07:52.951195       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:07:52.951199       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:07:52.951214       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:07:52.951225       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:07:52.951876       1 config.go:309] "Starting node config controller"
	I0110 10:07:52.951895       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:07:52.951902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:07:53.052004       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:07:53.052004       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 10:07:53.052036       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9ca4c73ec1b58d19272d076cb1667350dee8e33e688aefff55b6ee374ff3ceb7] <==
	I0110 10:07:48.878715       1 serving.go:386] Generated self-signed cert in-memory
	W0110 10:07:51.176231       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 10:07:51.176260       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 10:07:51.176292       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 10:07:51.176300       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 10:07:51.266361       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 10:07:51.266389       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:07:51.270638       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 10:07:51.270769       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 10:07:51.270783       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:07:51.270805       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 10:07:51.371226       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:08:07 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:07.213464     791 scope.go:122] "RemoveContainer" containerID="96878e8f3f2a6afdee8c3be9c93e7b0ca7abc8e89b1b74c4c6d9b686c5250e04"
	Jan 10 10:08:07 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:07.213688     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-47kqw_kubernetes-dashboard(ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" podUID="ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1"
	Jan 10 10:08:08 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:08.216877     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:08 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:08.216919     791 scope.go:122] "RemoveContainer" containerID="96878e8f3f2a6afdee8c3be9c93e7b0ca7abc8e89b1b74c4c6d9b686c5250e04"
	Jan 10 10:08:08 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:08.217070     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-47kqw_kubernetes-dashboard(ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" podUID="ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1"
	Jan 10 10:08:16 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:16.512068     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:16 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:16.512112     791 scope.go:122] "RemoveContainer" containerID="96878e8f3f2a6afdee8c3be9c93e7b0ca7abc8e89b1b74c4c6d9b686c5250e04"
	Jan 10 10:08:17 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:17.238187     791 scope.go:122] "RemoveContainer" containerID="96878e8f3f2a6afdee8c3be9c93e7b0ca7abc8e89b1b74c4c6d9b686c5250e04"
	Jan 10 10:08:17 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:17.238465     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:17 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:17.238492     791 scope.go:122] "RemoveContainer" containerID="bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4"
	Jan 10 10:08:17 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:17.238638     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-47kqw_kubernetes-dashboard(ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" podUID="ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1"
	Jan 10 10:08:23 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:23.254222     791 scope.go:122] "RemoveContainer" containerID="ddddfc63498775b48e0c47bd5b39459b83f233b5a5bc1ddba5d3384dc4b54429"
	Jan 10 10:08:26 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:26.512041     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:26 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:26.512094     791 scope.go:122] "RemoveContainer" containerID="bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4"
	Jan 10 10:08:26 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:26.512257     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-47kqw_kubernetes-dashboard(ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" podUID="ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1"
	Jan 10 10:08:27 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:27.901013     791 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5kgtf" containerName="coredns"
	Jan 10 10:08:38 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:38.059192     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:38 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:38.059671     791 scope.go:122] "RemoveContainer" containerID="bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4"
	Jan 10 10:08:38 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:38.296988     791 scope.go:122] "RemoveContainer" containerID="bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4"
	Jan 10 10:08:39 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:39.301784     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:39 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:39.302249     791 scope.go:122] "RemoveContainer" containerID="6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203"
	Jan 10 10:08:39 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:39.302524     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-47kqw_kubernetes-dashboard(ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" podUID="ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1"
	Jan 10 10:08:42 default-k8s-diff-port-820203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 10:08:42 default-k8s-diff-port-820203 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 10:08:42 default-k8s-diff-port-820203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
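The back-off intervals in the kubelet entries above (20s, then 40s) are the kubelet's standard CrashLoopBackOff doubling for the dashboard-metrics-scraper pod; the scraper's own exit reason is only visible from its previous container log. An illustrative command for that, not part of the captured journal and assuming kubectl points at this profile's context, would be:

	kubectl --context default-k8s-diff-port-820203 -n kubernetes-dashboard logs dashboard-metrics-scraper-867fb5f87b-47kqw --previous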
	
	
	==> kubernetes-dashboard [c49d3ac474b567cf49fa11e878d888eeb4e94ff1b550c07a0dba4f375ccc7359] <==
	2026/01/10 10:08:01 Using namespace: kubernetes-dashboard
	2026/01/10 10:08:01 Using in-cluster config to connect to apiserver
	2026/01/10 10:08:01 Using secret token for csrf signing
	2026/01/10 10:08:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 10:08:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 10:08:01 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 10:08:01 Generating JWE encryption key
	2026/01/10 10:08:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 10:08:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 10:08:01 Initializing JWE encryption key from synchronized object
	2026/01/10 10:08:01 Creating in-cluster Sidecar client
	2026/01/10 10:08:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:08:01 Serving insecurely on HTTP port: 9090
	2026/01/10 10:08:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:08:01 Starting overwatch
	
	
	==> storage-provisioner [6a5cc272c2a2c409ffe00a31dc484d5849a8d0e69199c5120f23162d176be795] <==
	I0110 10:08:23.331684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:08:23.365905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:08:23.365968       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 10:08:23.368435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:26.825049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:31.084921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:34.687334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:37.741504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:40.764683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:40.773142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:08:40.773590       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:08:40.773683       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e73b981a-6c80-4d85-b5f4-5190b80286fa", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-820203_cac72664-7a51-4dc9-9465-f96fa6ed5e25 became leader
	I0110 10:08:40.776363       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-820203_cac72664-7a51-4dc9-9465-f96fa6ed5e25!
	W0110 10:08:40.785900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:40.791407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:08:40.877138       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-820203_cac72664-7a51-4dc9-9465-f96fa6ed5e25!
	W0110 10:08:42.794639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:42.803038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:44.816803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:44.855565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ddddfc63498775b48e0c47bd5b39459b83f233b5a5bc1ddba5d3384dc4b54429] <==
	I0110 10:07:52.624189       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 10:08:22.626109       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
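The crashed storage-provisioner in the logs above exits because it cannot reach the apiserver's ClusterIP (10.96.0.1:443). A minimal way to probe that path by hand, assuming kubectl is pointed at the same profile's context (these commands are illustrative and not part of the captured test output):

	kubectl --context default-k8s-diff-port-820203 -n default get svc kubernetes
	kubectl --context default-k8s-diff-port-820203 -n default get endpoints kubernetes

An empty endpoints list here would mean the ClusterIP has nothing to route to, which is consistent with the i/o timeout seen while the apiserver was restarting.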
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203: exit status 2 (458.273302ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
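The Go-template --format used above only reports a single field, so an exit status 2 gives little detail on its own. To see every component's state at once one could run something like the following (a sketch assuming the --output flag available in current minikube releases; not part of the captured run):

	out/minikube-linux-arm64 status -p default-k8s-diff-port-820203 --output=json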
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-820203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-820203
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-820203:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08",
	        "Created": "2026-01-10T10:06:35.311708414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 524323,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:07:39.594720586Z",
	            "FinishedAt": "2026-01-10T10:07:38.199540668Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/hostname",
	        "HostsPath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/hosts",
	        "LogPath": "/var/lib/docker/containers/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08/72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08-json.log",
	        "Name": "/default-k8s-diff-port-820203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-820203:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-820203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "72463dca0fe31bebb995c0b1d1179e79a664d4cad11ee45856d699b0f8f7af08",
	                "LowerDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9d75926a95253a9d7da9983310a59efbc7d4bc990c61fbb511908e59014af274/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-820203",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-820203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-820203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-820203",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-820203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86e6f0e86181235e4a1bf355123725afac28bc06e9d9cc35a3a8619792c76785",
	            "SandboxKey": "/var/run/docker/netns/86e6f0e86181",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-820203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:74:25:5f:39:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e6955d7ca364871106ab81e8846bbb3fa5f63fcfbf0bbc67db73305008bd736d",
	                    "EndpointID": "98b01d2c205388360cc93f0b9dbdedfc339d3e0759ea9204c5aabc374df29485",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-820203",
	                        "72463dca0fe3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
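When only one field from this inspect dump is needed, the same Go-template mechanism the harness uses for the SSH port also works for the forwarded apiserver port, for example (an illustrative command, not part of the test run):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-820203

Against the state shown above this would print 33457.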
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203: exit status 2 (410.054187ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-820203 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-820203 logs -n 25: (1.569780939s)
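The 25-line cap per log unit in this post-mortem can hide earlier failures; when reproducing locally, a larger window can be captured to a file with something like the following (a sketch assuming minikube's --file flag; not part of the captured run):

	out/minikube-linux-arm64 -p default-k8s-diff-port-820203 logs -n 200 --file=./default-k8s-diff-port-820203.log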
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:04 UTC │ 10 Jan 26 10:05 UTC │
	│ image   │ no-preload-964204 image list --format=json                                                                                                                                                                                                    │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ pause   │ -p no-preload-964204 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │                     │
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:06 UTC │
	│ ssh     │ force-systemd-flag-524845 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p force-systemd-flag-524845                                                                                                                                                                                                                  │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p disable-driver-mounts-757819                                                                                                                                                                                                               │ disable-driver-mounts-757819 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-219333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │                     │
	│ stop    │ -p embed-certs-219333 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-219333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-820203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-820203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-820203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:08 UTC │
	│ image   │ embed-certs-219333 image list --format=json                                                                                                                                                                                                   │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p embed-certs-219333 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ delete  │ -p embed-certs-219333                                                                                                                                                                                                                         │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ delete  │ -p embed-certs-219333                                                                                                                                                                                                                         │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ start   │ -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-474984            │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ image   │ default-k8s-diff-port-820203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p default-k8s-diff-port-820203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:08:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:08:22.146423  527825 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:08:22.146576  527825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:08:22.146601  527825 out.go:374] Setting ErrFile to fd 2...
	I0110 10:08:22.146615  527825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:08:22.147060  527825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:08:22.147614  527825 out.go:368] Setting JSON to false
	I0110 10:08:22.149011  527825 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10252,"bootTime":1768029451,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:08:22.149344  527825 start.go:143] virtualization:  
	I0110 10:08:22.153036  527825 out.go:179] * [newest-cni-474984] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:08:22.157147  527825 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:08:22.157237  527825 notify.go:221] Checking for updates...
	I0110 10:08:22.161129  527825 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:08:22.164052  527825 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:08:22.166888  527825 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:08:22.169866  527825 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:08:22.172739  527825 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:08:22.176105  527825 config.go:182] Loaded profile config "default-k8s-diff-port-820203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:08:22.176229  527825 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:08:22.212299  527825 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:08:22.212446  527825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:08:22.276037  527825 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:08:22.266250185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:08:22.276149  527825 docker.go:319] overlay module found
	I0110 10:08:22.279311  527825 out.go:179] * Using the docker driver based on user configuration
	I0110 10:08:22.282261  527825 start.go:309] selected driver: docker
	I0110 10:08:22.282281  527825 start.go:928] validating driver "docker" against <nil>
	I0110 10:08:22.282295  527825 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:08:22.283046  527825 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:08:22.338486  527825 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:08:22.329009505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:08:22.338655  527825 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 10:08:22.338685  527825 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 10:08:22.338915  527825 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 10:08:22.342015  527825 out.go:179] * Using Docker driver with root privileges
	I0110 10:08:22.344896  527825 cni.go:84] Creating CNI manager for ""
	I0110 10:08:22.344964  527825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:08:22.344978  527825 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 10:08:22.345061  527825 start.go:353] cluster config:
	{Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:08:22.350041  527825 out.go:179] * Starting "newest-cni-474984" primary control-plane node in "newest-cni-474984" cluster
	I0110 10:08:22.352873  527825 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:08:22.355735  527825 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:08:22.358593  527825 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:08:22.358607  527825 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:08:22.358649  527825 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:08:22.358707  527825 cache.go:65] Caching tarball of preloaded images
	I0110 10:08:22.358794  527825 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:08:22.358806  527825 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:08:22.358950  527825 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/config.json ...
	I0110 10:08:22.358973  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/config.json: {Name:mkcb69c8502d9a46ea5e77ecbcee5b08c7fc7f41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:22.383111  527825 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:08:22.383131  527825 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:08:22.383148  527825 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:08:22.383184  527825 start.go:360] acquireMachinesLock for newest-cni-474984: {Name:mk0515f3568da12603bdab21609a1a4ed360d8a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:08:22.383287  527825 start.go:364] duration metric: took 88.345µs to acquireMachinesLock for "newest-cni-474984"
	I0110 10:08:22.383311  527825 start.go:93] Provisioning new machine with config: &{Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:08:22.383380  527825 start.go:125] createHost starting for "" (driver="docker")
	W0110 10:08:20.890293  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:22.890627  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	I0110 10:08:22.389162  527825 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 10:08:22.389396  527825 start.go:159] libmachine.API.Create for "newest-cni-474984" (driver="docker")
	I0110 10:08:22.389433  527825 client.go:173] LocalClient.Create starting
	I0110 10:08:22.389500  527825 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 10:08:22.389538  527825 main.go:144] libmachine: Decoding PEM data...
	I0110 10:08:22.389555  527825 main.go:144] libmachine: Parsing certificate...
	I0110 10:08:22.389610  527825 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 10:08:22.389632  527825 main.go:144] libmachine: Decoding PEM data...
	I0110 10:08:22.389644  527825 main.go:144] libmachine: Parsing certificate...
	I0110 10:08:22.389996  527825 cli_runner.go:164] Run: docker network inspect newest-cni-474984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 10:08:22.405471  527825 cli_runner.go:211] docker network inspect newest-cni-474984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 10:08:22.405550  527825 network_create.go:284] running [docker network inspect newest-cni-474984] to gather additional debugging logs...
	I0110 10:08:22.405572  527825 cli_runner.go:164] Run: docker network inspect newest-cni-474984
	W0110 10:08:22.425978  527825 cli_runner.go:211] docker network inspect newest-cni-474984 returned with exit code 1
	I0110 10:08:22.426007  527825 network_create.go:287] error running [docker network inspect newest-cni-474984]: docker network inspect newest-cni-474984: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-474984 not found
	I0110 10:08:22.426027  527825 network_create.go:289] output of [docker network inspect newest-cni-474984]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-474984 not found
	
	** /stderr **
	I0110 10:08:22.426131  527825 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:08:22.443167  527825 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 10:08:22.443664  527825 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 10:08:22.443934  527825 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 10:08:22.444371  527825 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a33770}
	I0110 10:08:22.444397  527825 network_create.go:124] attempt to create docker network newest-cni-474984 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 10:08:22.444460  527825 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-474984 newest-cni-474984
	I0110 10:08:22.510396  527825 network_create.go:108] docker network newest-cni-474984 192.168.76.0/24 created
	I0110 10:08:22.510431  527825 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-474984" container
	I0110 10:08:22.510513  527825 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 10:08:22.526751  527825 cli_runner.go:164] Run: docker volume create newest-cni-474984 --label name.minikube.sigs.k8s.io=newest-cni-474984 --label created_by.minikube.sigs.k8s.io=true
	I0110 10:08:22.544321  527825 oci.go:103] Successfully created a docker volume newest-cni-474984
	I0110 10:08:22.544405  527825 cli_runner.go:164] Run: docker run --rm --name newest-cni-474984-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-474984 --entrypoint /usr/bin/test -v newest-cni-474984:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 10:08:23.122707  527825 oci.go:107] Successfully prepared a docker volume newest-cni-474984
	I0110 10:08:23.122784  527825 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:08:23.122795  527825 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 10:08:23.122870  527825 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-474984:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 10:08:27.014128  527825 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-474984:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.891194697s)
	I0110 10:08:27.014172  527825 kic.go:203] duration metric: took 3.891372642s to extract preloaded images to volume ...
	W0110 10:08:27.014321  527825 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 10:08:27.014437  527825 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 10:08:27.083657  527825 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-474984 --name newest-cni-474984 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-474984 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-474984 --network newest-cni-474984 --ip 192.168.76.2 --volume newest-cni-474984:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	W0110 10:08:25.391529  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	W0110 10:08:27.892932  524195 pod_ready.go:104] pod "coredns-7d764666f9-5kgtf" is not "Ready", error: <nil>
	I0110 10:08:28.388297  524195 pod_ready.go:94] pod "coredns-7d764666f9-5kgtf" is "Ready"
	I0110 10:08:28.388329  524195 pod_ready.go:86] duration metric: took 35.505048353s for pod "coredns-7d764666f9-5kgtf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.390856  524195 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.395101  524195 pod_ready.go:94] pod "etcd-default-k8s-diff-port-820203" is "Ready"
	I0110 10:08:28.395181  524195 pod_ready.go:86] duration metric: took 4.295822ms for pod "etcd-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.397263  524195 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.401405  524195 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-820203" is "Ready"
	I0110 10:08:28.401436  524195 pod_ready.go:86] duration metric: took 4.144585ms for pod "kube-apiserver-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.403785  524195 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.586318  524195 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-820203" is "Ready"
	I0110 10:08:28.586347  524195 pod_ready.go:86] duration metric: took 182.531757ms for pod "kube-controller-manager-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:28.786541  524195 pod_ready.go:83] waiting for pod "kube-proxy-h677z" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:29.187098  524195 pod_ready.go:94] pod "kube-proxy-h677z" is "Ready"
	I0110 10:08:29.187128  524195 pod_ready.go:86] duration metric: took 400.560823ms for pod "kube-proxy-h677z" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:29.387390  524195 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:29.786652  524195 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-820203" is "Ready"
	I0110 10:08:29.786680  524195 pod_ready.go:86] duration metric: took 399.263667ms for pod "kube-scheduler-default-k8s-diff-port-820203" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 10:08:29.786693  524195 pod_ready.go:40] duration metric: took 36.908123471s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 10:08:29.844610  524195 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:08:29.847572  524195 out.go:203] 
	W0110 10:08:29.850524  524195 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:08:29.853437  524195 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:08:29.856404  524195 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-820203" cluster and "default" namespace by default
	I0110 10:08:27.402837  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Running}}
	I0110 10:08:27.422099  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:08:27.446906  527825 cli_runner.go:164] Run: docker exec newest-cni-474984 stat /var/lib/dpkg/alternatives/iptables
	I0110 10:08:27.522818  527825 oci.go:144] the created container "newest-cni-474984" has a running status.
	I0110 10:08:27.522855  527825 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa...
	I0110 10:08:27.993326  527825 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 10:08:28.021316  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:08:28.042660  527825 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 10:08:28.042686  527825 kic_runner.go:114] Args: [docker exec --privileged newest-cni-474984 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 10:08:28.084118  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:08:28.108302  527825 machine.go:94] provisionDockerMachine start ...
	I0110 10:08:28.108396  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:28.126638  527825 main.go:144] libmachine: Using SSH client type: native
	I0110 10:08:28.126984  527825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I0110 10:08:28.127000  527825 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:08:28.127601  527825 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49438->127.0.0.1:33459: read: connection reset by peer
	I0110 10:08:31.284642  527825 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-474984
	
	I0110 10:08:31.284667  527825 ubuntu.go:182] provisioning hostname "newest-cni-474984"
	I0110 10:08:31.284759  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:31.304321  527825 main.go:144] libmachine: Using SSH client type: native
	I0110 10:08:31.304698  527825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I0110 10:08:31.304714  527825 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-474984 && echo "newest-cni-474984" | sudo tee /etc/hostname
	I0110 10:08:31.475094  527825 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-474984
	
	I0110 10:08:31.475170  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:31.494093  527825 main.go:144] libmachine: Using SSH client type: native
	I0110 10:08:31.494430  527825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I0110 10:08:31.494455  527825 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-474984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-474984/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-474984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:08:31.644665  527825 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:08:31.644693  527825 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:08:31.644719  527825 ubuntu.go:190] setting up certificates
	I0110 10:08:31.644729  527825 provision.go:84] configureAuth start
	I0110 10:08:31.644789  527825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:08:31.661786  527825 provision.go:143] copyHostCerts
	I0110 10:08:31.661858  527825 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:08:31.661872  527825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:08:31.661953  527825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:08:31.662056  527825 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:08:31.662068  527825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:08:31.662099  527825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:08:31.662186  527825 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:08:31.662197  527825 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:08:31.662226  527825 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:08:31.662295  527825 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.newest-cni-474984 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-474984]
	I0110 10:08:32.100875  527825 provision.go:177] copyRemoteCerts
	I0110 10:08:32.100973  527825 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:08:32.101049  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:32.118733  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:32.220360  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:08:32.239658  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 10:08:32.259607  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:08:32.278441  527825 provision.go:87] duration metric: took 633.688159ms to configureAuth
	I0110 10:08:32.278472  527825 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:08:32.278673  527825 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:08:32.278785  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:32.298521  527825 main.go:144] libmachine: Using SSH client type: native
	I0110 10:08:32.298870  527825 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33459 <nil> <nil>}
	I0110 10:08:32.298884  527825 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:08:32.711560  527825 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:08:32.711589  527825 machine.go:97] duration metric: took 4.603262136s to provisionDockerMachine
	I0110 10:08:32.711600  527825 client.go:176] duration metric: took 10.322156275s to LocalClient.Create
	I0110 10:08:32.711619  527825 start.go:167] duration metric: took 10.32222313s to libmachine.API.Create "newest-cni-474984"
	I0110 10:08:32.711634  527825 start.go:293] postStartSetup for "newest-cni-474984" (driver="docker")
	I0110 10:08:32.711645  527825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:08:32.711708  527825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:08:32.711762  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:32.728367  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:32.832570  527825 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:08:32.838490  527825 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:08:32.838566  527825 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:08:32.838595  527825 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:08:32.838677  527825 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:08:32.838810  527825 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:08:32.838987  527825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:08:32.850938  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:08:32.870704  527825 start.go:296] duration metric: took 159.055637ms for postStartSetup
	I0110 10:08:32.871091  527825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:08:32.888132  527825 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/config.json ...
	I0110 10:08:32.888406  527825 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:08:32.888456  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:32.904904  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:33.011613  527825 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:08:33.017363  527825 start.go:128] duration metric: took 10.633968529s to createHost
	I0110 10:08:33.017389  527825 start.go:83] releasing machines lock for "newest-cni-474984", held for 10.634094133s
	I0110 10:08:33.017473  527825 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:08:33.034568  527825 ssh_runner.go:195] Run: cat /version.json
	I0110 10:08:33.034619  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:33.034910  527825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:08:33.034965  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:33.058112  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:33.068762  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:33.274754  527825 ssh_runner.go:195] Run: systemctl --version
	I0110 10:08:33.281449  527825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:08:33.321146  527825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:08:33.325799  527825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:08:33.325869  527825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:08:33.358024  527825 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 10:08:33.358047  527825 start.go:496] detecting cgroup driver to use...
	I0110 10:08:33.358080  527825 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:08:33.358132  527825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:08:33.375752  527825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:08:33.389629  527825 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:08:33.389725  527825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:08:33.406725  527825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:08:33.425868  527825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:08:33.568578  527825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:08:33.696777  527825 docker.go:234] disabling docker service ...
	I0110 10:08:33.696874  527825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:08:33.718412  527825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:08:33.733486  527825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:08:33.855146  527825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:08:33.980055  527825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:08:33.993298  527825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:08:34.009234  527825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:08:34.009373  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.019335  527825 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:08:34.019428  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.028686  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.038842  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.048981  527825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:08:34.058256  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.068269  527825 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.083733  527825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:08:34.092589  527825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:08:34.100969  527825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:08:34.108488  527825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:08:34.230025  527825 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:08:34.405632  527825 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:08:34.405743  527825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:08:34.410340  527825 start.go:574] Will wait 60s for crictl version
	I0110 10:08:34.410477  527825 ssh_runner.go:195] Run: which crictl
	I0110 10:08:34.414398  527825 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:08:34.439733  527825 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:08:34.439866  527825 ssh_runner.go:195] Run: crio --version
	I0110 10:08:34.476277  527825 ssh_runner.go:195] Run: crio --version
	I0110 10:08:34.511520  527825 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
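	The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl), restart CRI-O, and then probe it with crictl. To repeat that verification by hand on the node, assuming the same config path and socket shown in this log, something like the following should suffice:
	
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version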
	I0110 10:08:34.514424  527825 cli_runner.go:164] Run: docker network inspect newest-cni-474984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:08:34.534982  527825 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:08:34.539078  527825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:08:34.552098  527825 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 10:08:34.554885  527825 kubeadm.go:884] updating cluster {Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:08:34.555014  527825 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:08:34.555094  527825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:08:34.592857  527825 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:08:34.592879  527825 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:08:34.592943  527825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:08:34.622915  527825 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:08:34.622936  527825 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:08:34.622946  527825 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 10:08:34.623034  527825 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-474984 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:08:34.623126  527825 ssh_runner.go:195] Run: crio config
	I0110 10:08:34.686994  527825 cni.go:84] Creating CNI manager for ""
	I0110 10:08:34.687020  527825 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:08:34.687044  527825 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 10:08:34.687074  527825 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-474984 NodeName:newest-cni-474984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:08:34.687226  527825 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-474984"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:08:34.687300  527825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:08:34.697113  527825 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:08:34.697237  527825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:08:34.704917  527825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 10:08:34.718899  527825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:08:34.732924  527825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
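	The 2232-byte payload written to /var/tmp/minikube/kubeadm.yaml.new is the kubeadm configuration rendered above; it is promoted to kubeadm.yaml and passed to kubeadm init later in this log. To sanity-check such a file by hand, recent kubeadm releases include a validator; a rough sketch using the binary path from this run:
	
	  sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new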
	I0110 10:08:34.746833  527825 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:08:34.750525  527825 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:08:34.760214  527825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:08:34.871406  527825 ssh_runner.go:195] Run: sudo systemctl start kubelet
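	The two in-memory scp steps before this install the kubelet systemd unit (/lib/systemd/system/kubelet.service) and the kubeadm drop-in carrying the ExecStart line shown earlier (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf), which the daemon-reload and start above pick up. To see what systemd actually merged from those files on the node, one option is:
	
	  sudo systemctl cat kubelet
	  sudo systemctl is-active kubelet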
	I0110 10:08:34.888727  527825 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984 for IP: 192.168.76.2
	I0110 10:08:34.888750  527825 certs.go:195] generating shared ca certs ...
	I0110 10:08:34.888768  527825 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:34.888913  527825 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:08:34.888965  527825 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:08:34.888973  527825 certs.go:257] generating profile certs ...
	I0110 10:08:34.889026  527825 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.key
	I0110 10:08:34.889054  527825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.crt with IP's: []
	I0110 10:08:35.183183  527825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.crt ...
	I0110 10:08:35.183218  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.crt: {Name:mk7b4d0de44caf1237e6eb083960f54b622a5c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.183422  527825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.key ...
	I0110 10:08:35.183435  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.key: {Name:mk11245467da1bac33fac3d275cc47339df26572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.183545  527825 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key.168eb993
	I0110 10:08:35.183562  527825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt.168eb993 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 10:08:35.347315  527825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt.168eb993 ...
	I0110 10:08:35.347347  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt.168eb993: {Name:mkdc93a78831f87e91e31eb8e9e04c917c9e3483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.347536  527825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key.168eb993 ...
	I0110 10:08:35.347550  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key.168eb993: {Name:mkb57143d5d90fd749c80c0a1bfd69e4d7a3e03b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.347637  527825 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt.168eb993 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt
	I0110 10:08:35.347719  527825 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key.168eb993 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key
	I0110 10:08:35.347780  527825 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key
	I0110 10:08:35.347798  527825 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.crt with IP's: []
	I0110 10:08:35.744305  527825 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.crt ...
	I0110 10:08:35.744336  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.crt: {Name:mk3640570b1ee09e5fb7441e05560bb5a83443c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.744535  527825 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key ...
	I0110 10:08:35.744547  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key: {Name:mk29baa696dd389b8ae263ce69864e5c9f0229f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:35.744740  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:08:35.744788  527825 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:08:35.744802  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:08:35.744829  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:08:35.744861  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:08:35.744887  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:08:35.744935  527825 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:08:35.745557  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:08:35.764030  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:08:35.782903  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:08:35.800655  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:08:35.818657  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 10:08:35.836423  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 10:08:35.855885  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:08:35.874234  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 10:08:35.891845  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:08:35.908992  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:08:35.926625  527825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:08:35.945318  527825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:08:35.959774  527825 ssh_runner.go:195] Run: openssl version
	I0110 10:08:35.968153  527825 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:08:35.975768  527825 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:08:35.984166  527825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:08:35.987869  527825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:08:35.987935  527825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:08:36.029981  527825 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:08:36.038145  527825 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
	I0110 10:08:36.046125  527825 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:08:36.053831  527825 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:08:36.061975  527825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:08:36.066038  527825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:08:36.066107  527825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:08:36.107424  527825 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:08:36.114876  527825 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 10:08:36.122557  527825 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:08:36.130426  527825 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:08:36.138300  527825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:08:36.142233  527825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:08:36.142337  527825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:08:36.183554  527825 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:08:36.191110  527825 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
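	Each test -L / ln -fs pair above links a CA certificate into /etc/ssl/certs under its OpenSSL subject hash (here 3ec20f2e.0, b5213941.0 and 51391683.0). Done manually for the minikubeCA file named in this log, the same step looks roughly like:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	
	which is why the link created at 10:08:36.114876 is /etc/ssl/certs/b5213941.0.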
	I0110 10:08:36.198590  527825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:08:36.202613  527825 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 10:08:36.202716  527825 kubeadm.go:401] StartCluster: {Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:08:36.202806  527825 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:08:36.202866  527825 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:08:36.232340  527825 cri.go:96] found id: ""
	I0110 10:08:36.232415  527825 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:08:36.240998  527825 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 10:08:36.249270  527825 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:08:36.249372  527825 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:08:36.257694  527825 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:08:36.257717  527825 kubeadm.go:158] found existing configuration files:
	
	I0110 10:08:36.257816  527825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 10:08:36.266300  527825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:08:36.266410  527825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:08:36.274127  527825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 10:08:36.281966  527825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:08:36.282038  527825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:08:36.291202  527825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 10:08:36.301411  527825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:08:36.301483  527825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:08:36.309209  527825 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 10:08:36.317001  527825 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:08:36.317092  527825 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 10:08:36.324478  527825 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:08:36.368847  527825 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:08:36.369218  527825 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:08:36.452920  527825 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:08:36.453059  527825 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:08:36.453127  527825 kubeadm.go:319] OS: Linux
	I0110 10:08:36.453212  527825 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:08:36.453294  527825 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:08:36.453369  527825 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:08:36.453453  527825 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:08:36.453526  527825 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:08:36.453609  527825 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:08:36.453679  527825 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:08:36.453758  527825 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:08:36.453831  527825 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:08:36.527310  527825 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:08:36.527485  527825 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:08:36.527598  527825 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:08:36.535513  527825 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:08:36.542258  527825 out.go:252]   - Generating certificates and keys ...
	I0110 10:08:36.542428  527825 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:08:36.542541  527825 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:08:36.771725  527825 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:08:37.215703  527825 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:08:37.310671  527825 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:08:37.506271  527825 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:08:37.810543  527825 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:08:37.810909  527825 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-474984] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 10:08:37.960977  527825 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:08:37.961182  527825 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-474984] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 10:08:38.216380  527825 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:08:38.641321  527825 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:08:38.717281  527825 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:08:38.717603  527825 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:08:38.860205  527825 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:08:39.146319  527825 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:08:39.804040  527825 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:08:39.995343  527825 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:08:40.396771  527825 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:08:40.397510  527825 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:08:40.400173  527825 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 10:08:40.405664  527825 out.go:252]   - Booting up control plane ...
	I0110 10:08:40.405837  527825 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 10:08:40.405953  527825 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 10:08:40.406041  527825 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 10:08:40.422368  527825 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 10:08:40.422910  527825 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 10:08:40.431913  527825 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 10:08:40.432783  527825 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 10:08:40.433039  527825 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 10:08:40.569264  527825 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 10:08:40.569433  527825 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 10:08:41.072799  527825 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.862247ms
	I0110 10:08:41.072962  527825 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 10:08:41.073082  527825 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0110 10:08:41.073210  527825 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 10:08:41.073320  527825 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 10:08:43.582314  527825 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.508871581s
	I0110 10:08:45.724713  527825 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.651830764s
	
	
	==> CRI-O <==
	Jan 10 10:08:23 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:23.315306981Z" level=info msg="Started container" PID=1697 containerID=6a5cc272c2a2c409ffe00a31dc484d5849a8d0e69199c5120f23162d176be795 description=kube-system/storage-provisioner/storage-provisioner id=343e7bc1-8e24-4fd9-adf9-8e18d45f123e name=/runtime.v1.RuntimeService/StartContainer sandboxID=26c60f23d70559a511b8eb11c300f04fb883fb39c7fa72e74f6981e63aa5c211
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.836210185Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.836870528Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.842885846Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.843028723Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.847232976Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.84726729Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.851631823Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.851665242Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.85168916Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.859779217Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 10:08:32 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:32.859812784Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.060402295Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9125a270-70f5-4213-895d-524bf6e5b107 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.064649002Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=215cd002-5b37-4932-a6d5-1b20b3508ebf name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.06604492Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw/dashboard-metrics-scraper" id=ca3e696e-28c4-44c7-ac4b-99dab1f35992 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.06619939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.078586797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.079543583Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.099611823Z" level=info msg="Created container 6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw/dashboard-metrics-scraper" id=ca3e696e-28c4-44c7-ac4b-99dab1f35992 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.105306592Z" level=info msg="Starting container: 6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203" id=733653aa-d3d7-43a1-885f-b41fd11c301a name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.109755892Z" level=info msg="Started container" PID=1778 containerID=6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw/dashboard-metrics-scraper id=733653aa-d3d7-43a1-885f-b41fd11c301a name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f303a6ae6fe27e94e07ede400ef920df6bc1ee61306999684233ca067ceffe5
	Jan 10 10:08:38 default-k8s-diff-port-820203 conmon[1776]: conmon 6a489a7e9368f7ee0254 <ninfo>: container 1778 exited with status 1
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.298401445Z" level=info msg="Removing container: bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4" id=29309262-30e8-4ad8-909f-1a8e5fcecb05 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.30670546Z" level=info msg="Error loading conmon cgroup of container bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4: cgroup deleted" id=29309262-30e8-4ad8-909f-1a8e5fcecb05 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 10:08:38 default-k8s-diff-port-820203 crio[659]: time="2026-01-10T10:08:38.309641509Z" level=info msg="Removed container bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw/dashboard-metrics-scraper" id=29309262-30e8-4ad8-909f-1a8e5fcecb05 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	6a489a7e9368f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   8f303a6ae6fe2       dashboard-metrics-scraper-867fb5f87b-47kqw             kubernetes-dashboard
	6a5cc272c2a2c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   26c60f23d7055       storage-provisioner                                    kube-system
	c49d3ac474b56       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   b4f9b9b0b7e58       kubernetes-dashboard-b84665fb8-sd2l8                   kubernetes-dashboard
	248e711dd6b8d       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           55 seconds ago       Running             coredns                     1                   dd8996126c524       coredns-7d764666f9-5kgtf                               kube-system
	a6169d1727208       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   b5336fc0e5737       busybox                                                default
	7fa9d7cc6055c       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           56 seconds ago       Running             kube-proxy                  1                   0aee706a394a0       kube-proxy-h677z                                       kube-system
	ddddfc6349877       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   26c60f23d7055       storage-provisioner                                    kube-system
	f1a26bb6b7a3f       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           56 seconds ago       Running             kindnet-cni                 1                   7156a00a95d26       kindnet-kg5mk                                          kube-system
	c8a0479b8f6a6       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   c583b17905c05       kube-controller-manager-default-k8s-diff-port-820203   kube-system
	9ca4c73ec1b58       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   03b1687e4066f       kube-scheduler-default-k8s-diff-port-820203            kube-system
	812d4c4e5e7a1       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   0325b8ea3f231       etcd-default-k8s-diff-port-820203                      kube-system
	91bbce93fe2f1       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   cff8092f62bf4       kube-apiserver-default-k8s-diff-port-820203            kube-system
	
	
	==> coredns [248e711dd6b8d17c98067727b2ed611fce6c5d304e26a6362e3527a9e6d612a7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:38946 - 19880 "HINFO IN 6707206200389323888.944827970982932827. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012293465s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-820203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-820203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=default-k8s-diff-port-820203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_06_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:06:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-820203
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:08:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:08:32 +0000   Sat, 10 Jan 2026 10:06:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:08:32 +0000   Sat, 10 Jan 2026 10:06:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:08:32 +0000   Sat, 10 Jan 2026 10:06:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 10:08:32 +0000   Sat, 10 Jan 2026 10:07:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-820203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                3458310f-51b7-4cba-9b86-ae28b618509b
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-5kgtf                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-default-k8s-diff-port-820203                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-kg5mk                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-820203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-820203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-h677z                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-820203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-47kqw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-sd2l8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node default-k8s-diff-port-820203 event: Registered Node default-k8s-diff-port-820203 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node default-k8s-diff-port-820203 event: Registered Node default-k8s-diff-port-820203 in Controller
	
	
	==> dmesg <==
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	[Jan10 10:06] overlayfs: idmapped layers are currently not supported
	[ +32.420107] overlayfs: idmapped layers are currently not supported
	[Jan10 10:07] overlayfs: idmapped layers are currently not supported
	[ +31.436967] overlayfs: idmapped layers are currently not supported
	[Jan10 10:08] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [812d4c4e5e7a1276ec1e7959d0c233923c12f5bb2d443666556dcafaf0675d47] <==
	{"level":"info","ts":"2026-01-10T10:07:48.199602Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:07:48.199611Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T10:07:48.200017Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T10:07:48.200029Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T10:07:48.215394Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-10T10:07:48.215469Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:07:48.215530Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T10:07:48.908541Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:48.908677Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:48.908757Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T10:07:48.908824Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:07:48.908866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:48.912535Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:48.912601Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:07:48.912644Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:48.912680Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T10:07:48.914977Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-820203 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:07:48.915165Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:07:48.916150Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:07:48.918213Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-10T10:07:48.920532Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:07:48.923942Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:07:48.923974Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T10:07:48.930176Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:07:48.934636Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:08:48 up  2:51,  0 user,  load average: 3.74, 2.27, 2.09
	Linux default-k8s-diff-port-820203 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f1a26bb6b7a3f3120f63aa290ec0bc44dd75c300ebd78d7f1e5f7235e903809a] <==
	I0110 10:07:52.639182       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:07:52.645394       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 10:07:52.645631       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:07:52.645944       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:07:52.645999       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:07:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:07:52.828311       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:07:52.828401       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:07:52.828435       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:07:52.829366       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 10:08:22.830836       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0110 10:08:22.830954       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 10:08:22.831123       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 10:08:22.832407       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0110 10:08:24.129075       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 10:08:24.129111       1 metrics.go:72] Registering metrics
	I0110 10:08:24.129178       1 controller.go:711] "Syncing nftables rules"
	I0110 10:08:32.828664       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 10:08:32.829305       1 main.go:301] handling current node
	I0110 10:08:42.827950       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 10:08:42.827989       1 main.go:301] handling current node
	
	
	==> kube-apiserver [91bbce93fe2f1d6b5b03b3c5e68f84111900401f78fc9963cae132487b50afe9] <==
	I0110 10:07:51.247846       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 10:07:51.255956       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 10:07:51.259923       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 10:07:51.262123       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:51.262160       1 policy_source.go:248] refreshing policies
	I0110 10:07:51.264123       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:07:51.267671       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 10:07:51.267727       1 aggregator.go:187] initial CRD sync complete...
	I0110 10:07:51.267754       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 10:07:51.267761       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 10:07:51.267770       1 cache.go:39] Caches are synced for autoregister controller
	I0110 10:07:51.272213       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:07:51.283816       1 shared_informer.go:377] "Caches are synced"
	E0110 10:07:51.287100       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 10:07:51.801632       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:07:51.853108       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:07:51.855036       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:07:51.898111       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:07:51.928222       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:07:51.972897       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:07:52.194852       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.85.105"}
	I0110 10:07:52.295490       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.158.241"}
	I0110 10:07:54.862924       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 10:07:54.968382       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:07:55.027975       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c8a0479b8f6a642cfc7ee579d8f6e15d1bfbd67e0c4ce4d3617f92af0f46fdde] <==
	I0110 10:07:54.443836       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.443934       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.443992       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.444047       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.444384       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.444920       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.444954       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.445022       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-820203"
	I0110 10:07:54.445080       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.445436       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.445469       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.445664       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.450154       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.450390       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.450445       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.451529       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 10:07:54.451629       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.451672       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.451742       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.451784       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.452487       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.523244       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.544755       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:54.544779       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:07:54.544785       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [7fa9d7cc6055cc2e1f6692c6ebf8145ae5267292a0c2ea1668696d165b3268f0] <==
	I0110 10:07:52.665628       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:07:52.802325       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:07:52.903463       1 shared_informer.go:377] "Caches are synced"
	I0110 10:07:52.903517       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 10:07:52.903598       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:07:52.944967       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:07:52.945025       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:07:52.949063       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:07:52.949372       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:07:52.949398       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:07:52.951147       1 config.go:200] "Starting service config controller"
	I0110 10:07:52.951174       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:07:52.951195       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:07:52.951199       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:07:52.951214       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:07:52.951225       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:07:52.951876       1 config.go:309] "Starting node config controller"
	I0110 10:07:52.951895       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:07:52.951902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:07:53.052004       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:07:53.052004       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 10:07:53.052036       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9ca4c73ec1b58d19272d076cb1667350dee8e33e688aefff55b6ee374ff3ceb7] <==
	I0110 10:07:48.878715       1 serving.go:386] Generated self-signed cert in-memory
	W0110 10:07:51.176231       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 10:07:51.176260       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 10:07:51.176292       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 10:07:51.176300       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 10:07:51.266361       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 10:07:51.266389       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:07:51.270638       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 10:07:51.270769       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 10:07:51.270783       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:07:51.270805       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 10:07:51.371226       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:08:07 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:07.213464     791 scope.go:122] "RemoveContainer" containerID="96878e8f3f2a6afdee8c3be9c93e7b0ca7abc8e89b1b74c4c6d9b686c5250e04"
	Jan 10 10:08:07 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:07.213688     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-47kqw_kubernetes-dashboard(ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" podUID="ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1"
	Jan 10 10:08:08 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:08.216877     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:08 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:08.216919     791 scope.go:122] "RemoveContainer" containerID="96878e8f3f2a6afdee8c3be9c93e7b0ca7abc8e89b1b74c4c6d9b686c5250e04"
	Jan 10 10:08:08 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:08.217070     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-47kqw_kubernetes-dashboard(ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" podUID="ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1"
	Jan 10 10:08:16 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:16.512068     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:16 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:16.512112     791 scope.go:122] "RemoveContainer" containerID="96878e8f3f2a6afdee8c3be9c93e7b0ca7abc8e89b1b74c4c6d9b686c5250e04"
	Jan 10 10:08:17 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:17.238187     791 scope.go:122] "RemoveContainer" containerID="96878e8f3f2a6afdee8c3be9c93e7b0ca7abc8e89b1b74c4c6d9b686c5250e04"
	Jan 10 10:08:17 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:17.238465     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:17 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:17.238492     791 scope.go:122] "RemoveContainer" containerID="bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4"
	Jan 10 10:08:17 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:17.238638     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-47kqw_kubernetes-dashboard(ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" podUID="ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1"
	Jan 10 10:08:23 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:23.254222     791 scope.go:122] "RemoveContainer" containerID="ddddfc63498775b48e0c47bd5b39459b83f233b5a5bc1ddba5d3384dc4b54429"
	Jan 10 10:08:26 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:26.512041     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:26 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:26.512094     791 scope.go:122] "RemoveContainer" containerID="bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4"
	Jan 10 10:08:26 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:26.512257     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-47kqw_kubernetes-dashboard(ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" podUID="ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1"
	Jan 10 10:08:27 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:27.901013     791 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5kgtf" containerName="coredns"
	Jan 10 10:08:38 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:38.059192     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:38 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:38.059671     791 scope.go:122] "RemoveContainer" containerID="bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4"
	Jan 10 10:08:38 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:38.296988     791 scope.go:122] "RemoveContainer" containerID="bbe6ed1b21087ad467c410b9f8cd38cacaffef4c6492993e207ad2e395bc72c4"
	Jan 10 10:08:39 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:39.301784     791 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" containerName="dashboard-metrics-scraper"
	Jan 10 10:08:39 default-k8s-diff-port-820203 kubelet[791]: I0110 10:08:39.302249     791 scope.go:122] "RemoveContainer" containerID="6a489a7e9368f7ee0254aae76ea59ec57564d1acc94730edfcf12f8329dab203"
	Jan 10 10:08:39 default-k8s-diff-port-820203 kubelet[791]: E0110 10:08:39.302524     791 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-47kqw_kubernetes-dashboard(ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-47kqw" podUID="ed7317b9-4d1f-4aa0-ba1f-93133a7c21c1"
	Jan 10 10:08:42 default-k8s-diff-port-820203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 10:08:42 default-k8s-diff-port-820203 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 10:08:42 default-k8s-diff-port-820203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c49d3ac474b567cf49fa11e878d888eeb4e94ff1b550c07a0dba4f375ccc7359] <==
	2026/01/10 10:08:01 Using namespace: kubernetes-dashboard
	2026/01/10 10:08:01 Using in-cluster config to connect to apiserver
	2026/01/10 10:08:01 Using secret token for csrf signing
	2026/01/10 10:08:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 10:08:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 10:08:01 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 10:08:01 Generating JWE encryption key
	2026/01/10 10:08:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 10:08:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 10:08:01 Initializing JWE encryption key from synchronized object
	2026/01/10 10:08:01 Creating in-cluster Sidecar client
	2026/01/10 10:08:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:08:01 Serving insecurely on HTTP port: 9090
	2026/01/10 10:08:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 10:08:01 Starting overwatch
	
	
	==> storage-provisioner [6a5cc272c2a2c409ffe00a31dc484d5849a8d0e69199c5120f23162d176be795] <==
	I0110 10:08:23.331684       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 10:08:23.365905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 10:08:23.365968       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 10:08:23.368435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:26.825049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:31.084921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:34.687334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:37.741504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:40.764683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:40.773142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:08:40.773590       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 10:08:40.773683       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e73b981a-6c80-4d85-b5f4-5190b80286fa", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-820203_cac72664-7a51-4dc9-9465-f96fa6ed5e25 became leader
	I0110 10:08:40.776363       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-820203_cac72664-7a51-4dc9-9465-f96fa6ed5e25!
	W0110 10:08:40.785900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:40.791407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 10:08:40.877138       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-820203_cac72664-7a51-4dc9-9465-f96fa6ed5e25!
	W0110 10:08:42.794639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:42.803038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:44.816803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:44.855565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:46.866721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:46.879511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:48.884172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 10:08:48.891809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ddddfc63498775b48e0c47bd5b39459b83f233b5a5bc1ddba5d3384dc4b54429] <==
	I0110 10:07:52.624189       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 10:08:22.626109       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203: exit status 2 (473.528816ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-820203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-474984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-474984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (610.303935ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:08:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-474984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-474984
helpers_test.go:244: (dbg) docker inspect newest-cni-474984:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913",
	        "Created": "2026-01-10T10:08:27.104727193Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 528233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:08:27.169823148Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/hosts",
	        "LogPath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913-json.log",
	        "Name": "/newest-cni-474984",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-474984:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-474984",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913",
	                "LowerDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-474984",
	                "Source": "/var/lib/docker/volumes/newest-cni-474984/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-474984",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-474984",
	                "name.minikube.sigs.k8s.io": "newest-cni-474984",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4096877c4938025a54ccb6cc15ddd95abf427c0581de4f9bf991f974721cb77c",
	            "SandboxKey": "/var/run/docker/netns/4096877c4938",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-474984": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:37:d5:d1:e1:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6ee25b83b7ac31be71c67e0a8b1d8dc5dbbff09959508135e36bff53cdc9f623",
	                    "EndpointID": "2a52b17d453a774d3ca9e0b8bb513809ac78ec39e585e8ab7daf1fdb118892af",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-474984",
	                        "fe5cd02e55d3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
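For reference, the docker inspect output above shows the node container's published host ports (22/tcp on 127.0.0.1:33459 for SSH, 8443/tcp on 33462 for the Kubernetes API server). A minimal standalone query for the SSH port, reusing the same Go template the test harness invokes later in this log, might look like:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-474984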
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474984 -n newest-cni-474984
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-474984 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-474984 logs -n 25: (1.6948683s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-964204                                                                                                                                                                                                                          │ no-preload-964204            │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:05 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:05 UTC │ 10 Jan 26 10:06 UTC │
	│ ssh     │ force-systemd-flag-524845 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p force-systemd-flag-524845                                                                                                                                                                                                                  │ force-systemd-flag-524845    │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ delete  │ -p disable-driver-mounts-757819                                                                                                                                                                                                               │ disable-driver-mounts-757819 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:06 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable metrics-server -p embed-certs-219333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │                     │
	│ stop    │ -p embed-certs-219333 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:06 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p embed-certs-219333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-820203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-820203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-820203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:08 UTC │
	│ image   │ embed-certs-219333 image list --format=json                                                                                                                                                                                                   │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p embed-certs-219333 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ delete  │ -p embed-certs-219333                                                                                                                                                                                                                         │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ delete  │ -p embed-certs-219333                                                                                                                                                                                                                         │ embed-certs-219333           │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ start   │ -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-474984            │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ image   │ default-k8s-diff-port-820203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p default-k8s-diff-port-820203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-820203                                                                                                                                                                                                               │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ delete  │ -p default-k8s-diff-port-820203                                                                                                                                                                                                               │ default-k8s-diff-port-820203 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ start   │ -p test-preload-dl-gcs-469953 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-469953   │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-474984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-474984            │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:08:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:08:53.710497  531421 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:08:53.710761  531421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:08:53.710788  531421 out.go:374] Setting ErrFile to fd 2...
	I0110 10:08:53.710810  531421 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:08:53.711125  531421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:08:53.711597  531421 out.go:368] Setting JSON to false
	I0110 10:08:53.712609  531421 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10283,"bootTime":1768029451,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:08:53.712696  531421 start.go:143] virtualization:  
	I0110 10:08:53.716372  531421 out.go:179] * [test-preload-dl-gcs-469953] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:08:53.720436  531421 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:08:53.720540  531421 notify.go:221] Checking for updates...
	I0110 10:08:53.726240  531421 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:08:53.729247  531421 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:08:53.732796  531421 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:08:53.735522  531421 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:08:53.738477  531421 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:08:53.742729  531421 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:08:53.742842  531421 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:08:53.804283  531421 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:08:53.804389  531421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:08:53.906234  531421 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 10:08:53.892605804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:08:53.906343  531421 docker.go:319] overlay module found
	I0110 10:08:53.909492  531421 out.go:179] * Using the docker driver based on user configuration
	I0110 10:08:52.300846  527825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:08:52.800989  527825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:08:53.302352  527825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:08:53.801636  527825 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 10:08:53.956763  527825 kubeadm.go:1114] duration metric: took 3.954665255s to wait for elevateKubeSystemPrivileges
	I0110 10:08:53.956793  527825 kubeadm.go:403] duration metric: took 17.754081882s to StartCluster
	I0110 10:08:53.956820  527825 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:53.956891  527825 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:08:53.957474  527825 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:08:53.957696  527825 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:08:53.957786  527825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 10:08:53.958275  527825 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:08:53.958317  527825 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:08:53.958378  527825 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-474984"
	I0110 10:08:53.958397  527825 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-474984"
	I0110 10:08:53.958418  527825 host.go:66] Checking if "newest-cni-474984" exists ...
	I0110 10:08:53.959015  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:08:53.960948  527825 addons.go:70] Setting default-storageclass=true in profile "newest-cni-474984"
	I0110 10:08:53.960972  527825 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-474984"
	I0110 10:08:53.961379  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:08:53.962004  527825 out.go:179] * Verifying Kubernetes components...
	I0110 10:08:53.980720  527825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:08:53.912224  531421 start.go:309] selected driver: docker
	I0110 10:08:53.912244  531421 start.go:928] validating driver "docker" against <nil>
	I0110 10:08:53.912365  531421 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:08:54.039660  531421 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 10:08:54.023606413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:08:54.039835  531421 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 10:08:54.040116  531421 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 10:08:54.040266  531421 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 10:08:54.045527  531421 out.go:179] * Using Docker driver with root privileges
	I0110 10:08:54.043207  527825 addons.go:239] Setting addon default-storageclass=true in "newest-cni-474984"
	I0110 10:08:54.043255  527825 host.go:66] Checking if "newest-cni-474984" exists ...
	I0110 10:08:54.043711  527825 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:08:54.045547  527825 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:08:54.048378  531421 cni.go:84] Creating CNI manager for ""
	I0110 10:08:54.048460  531421 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:08:54.048469  531421 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 10:08:54.048585  531421 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-469953 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.1 ClusterName:test-preload-dl-gcs-469953 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}

	I0110 10:08:54.051807  531421 out.go:179] * Starting "test-preload-dl-gcs-469953" primary control-plane node in "test-preload-dl-gcs-469953" cluster
	I0110 10:08:54.054921  531421 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:08:54.058996  531421 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:08:54.061924  531421 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.1 and runtime crio
	I0110 10:08:54.062121  531421 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:08:54.106595  531421 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0-rc.1/preloaded-images-k8s-v18-v1.34.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I0110 10:08:54.106618  531421 cache.go:65] Caching tarball of preloaded images
	I0110 10:08:54.106774  531421 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.1 and runtime crio
	I0110 10:08:54.112571  531421 out.go:179] * Downloading Kubernetes v1.34.0-rc.1 preload ...
	I0110 10:08:54.048433  527825 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:08:54.048455  527825 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:08:54.048608  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:54.094634  527825 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:08:54.094664  527825 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:08:54.094729  527825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:08:54.129982  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:54.132607  527825 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 10:08:54.157439  527825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33459 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:08:54.340200  527825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:08:54.447321  527825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:08:54.524843  527825 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:08:54.940912  527825 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0110 10:08:54.941895  527825 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:08:54.941954  527825 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0110 10:08:55.142077  527825 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "newest-cni-474984" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0110 10:08:55.142106  527825 start.go:161] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0110 10:08:55.742076  527825 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.217197462s)
	I0110 10:08:55.742390  527825 api_server.go:72] duration metric: took 1.784659655s to wait for apiserver process to appear ...
	I0110 10:08:55.742405  527825 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:08:55.742422  527825 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:08:55.745594  527825 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0110 10:08:55.749722  527825 addons.go:530] duration metric: took 1.791396504s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0110 10:08:55.760391  527825 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:08:55.766822  527825 api_server.go:141] control plane version: v1.35.0
	I0110 10:08:55.766851  527825 api_server.go:131] duration metric: took 24.438827ms to wait for apiserver health ...
	I0110 10:08:55.766861  527825 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:08:55.779238  527825 system_pods.go:59] 9 kube-system pods found
	I0110 10:08:55.779344  527825 system_pods.go:61] "coredns-7d764666f9-p8q4j" [a9749369-8007-4ae4-ae1f-59587fbc22a1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 10:08:55.779377  527825 system_pods.go:61] "coredns-7d764666f9-xpfml" [eb84126e-280a-465e-8285-c77ea1e49de4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 10:08:55.779403  527825 system_pods.go:61] "etcd-newest-cni-474984" [738613df-396f-4911-8345-f8011471a0b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:08:55.779428  527825 system_pods.go:61] "kindnet-92rlc" [f8e102eb-cf98-403c-9e68-b249d36ea4eb] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 10:08:55.779453  527825 system_pods.go:61] "kube-apiserver-newest-cni-474984" [c64c2fc1-0d92-4d38-a4ca-63d9439cffdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:08:55.779481  527825 system_pods.go:61] "kube-controller-manager-newest-cni-474984" [55f26b47-a82c-4ade-9fad-9f806091d48a] Running
	I0110 10:08:55.779504  527825 system_pods.go:61] "kube-proxy-fpllw" [bc315022-efa7-4370-896c-36d094209e88] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 10:08:55.779526  527825 system_pods.go:61] "kube-scheduler-newest-cni-474984" [8e056967-7cc7-4079-80dd-f856af7e8343] Running
	I0110 10:08:55.783144  527825 system_pods.go:61] "storage-provisioner" [19c1c419-c666-41b9-94ed-e8e852e9f2e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 10:08:55.783175  527825 system_pods.go:74] duration metric: took 16.30684ms to wait for pod list to return data ...
	I0110 10:08:55.783202  527825 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:08:55.787700  527825 default_sa.go:45] found service account: "default"
	I0110 10:08:55.787729  527825 default_sa.go:55] duration metric: took 4.492083ms for default service account to be created ...
	I0110 10:08:55.787743  527825 kubeadm.go:587] duration metric: took 1.830013294s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 10:08:55.787759  527825 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:08:55.791318  527825 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:08:55.791357  527825 node_conditions.go:123] node cpu capacity is 2
	I0110 10:08:55.791370  527825 node_conditions.go:105] duration metric: took 3.605352ms to run NodePressure ...
	I0110 10:08:55.791383  527825 start.go:242] waiting for startup goroutines ...
	I0110 10:08:55.791390  527825 start.go:247] waiting for cluster config update ...
	I0110 10:08:55.791401  527825 start.go:256] writing updated cluster config ...
	I0110 10:08:55.791687  527825 ssh_runner.go:195] Run: rm -f paused
	I0110 10:08:55.998610  527825 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:08:56.001776  527825 out.go:203] 
	W0110 10:08:56.007015  527825 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:08:56.010503  527825 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:08:56.013115  527825 out.go:179] * Done! kubectl is now configured to use "newest-cni-474984" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 10:08:41 newest-cni-474984 crio[837]: time="2026-01-10T10:08:41.694642521Z" level=info msg="Created container 1cc8280cf3058a7024ab9c2e2c04dfd933b80304aa2f04adc3215a2b0c3b6de7: kube-system/kube-controller-manager-newest-cni-474984/kube-controller-manager" id=7e38f46c-9b81-400f-90b6-76f4b44d8fcd name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:41 newest-cni-474984 crio[837]: time="2026-01-10T10:08:41.695562278Z" level=info msg="Starting container: 1cc8280cf3058a7024ab9c2e2c04dfd933b80304aa2f04adc3215a2b0c3b6de7" id=75d23265-d8d1-4a90-a085-8e8f6b4054b8 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:08:41 newest-cni-474984 crio[837]: time="2026-01-10T10:08:41.717118097Z" level=info msg="Started container" PID=1251 containerID=1cc8280cf3058a7024ab9c2e2c04dfd933b80304aa2f04adc3215a2b0c3b6de7 description=kube-system/kube-controller-manager-newest-cni-474984/kube-controller-manager id=75d23265-d8d1-4a90-a085-8e8f6b4054b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7cb940c08c55c59ae3aedaed9eeaba759413411afe7b964ec3f62bc534ee269a
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.371111202Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-fpllw/POD" id=4ca99bc3-4464-4b94-aaf6-da0f5760fec0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.371181725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.377330216Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4ca99bc3-4464-4b94-aaf6-da0f5760fec0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.37804742Z" level=info msg="Running pod sandbox: kube-system/kindnet-92rlc/POD" id=85569e7f-a4ee-45f5-b77f-afd186c8c225 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.378166026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.382825191Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=85569e7f-a4ee-45f5-b77f-afd186c8c225 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.387677915Z" level=info msg="Ran pod sandbox a165c895db0b64307ea7c71da3dce79de72785c6e2fe4a69c30f66732d6fa1e3 with infra container: kube-system/kube-proxy-fpllw/POD" id=4ca99bc3-4464-4b94-aaf6-da0f5760fec0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.38928272Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=cd75769a-16a6-4e03-908d-163993dae936 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.393074346Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=d8fd3028-a9ff-48a7-8e29-6c54457b9961 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.408927769Z" level=info msg="Creating container: kube-system/kube-proxy-fpllw/kube-proxy" id=468b6254-4560-432f-9d7b-9fe66bdae491 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.410353873Z" level=info msg="Ran pod sandbox a4516b138d2faef418e75e8770c4270773600ab5d5b2df020aa09477cc89e86a with infra container: kube-system/kindnet-92rlc/POD" id=85569e7f-a4ee-45f5-b77f-afd186c8c225 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.416419508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.423615953Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=0f2beef9-ab06-4e48-8ef0-03025a0055da name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.429956797Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=0f2beef9-ab06-4e48-8ef0-03025a0055da name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.430122492Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=0f2beef9-ab06-4e48-8ef0-03025a0055da name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.434531736Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=9813b15d-eb99-4f67-a8bd-b40408ba8c1f name=/runtime.v1.ImageService/PullImage
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.436392435Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.453200552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.453911241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.544335854Z" level=info msg="Created container 471608001a369d0b0173aa19e9555cafeca5771e9bd10f017cdeac79b01eb236: kube-system/kube-proxy-fpllw/kube-proxy" id=468b6254-4560-432f-9d7b-9fe66bdae491 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.550819976Z" level=info msg="Starting container: 471608001a369d0b0173aa19e9555cafeca5771e9bd10f017cdeac79b01eb236" id=5e2d91e7-ed0a-48de-9e4e-aa7843eb7338 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:08:55 newest-cni-474984 crio[837]: time="2026-01-10T10:08:55.564880827Z" level=info msg="Started container" PID=1473 containerID=471608001a369d0b0173aa19e9555cafeca5771e9bd10f017cdeac79b01eb236 description=kube-system/kube-proxy-fpllw/kube-proxy id=5e2d91e7-ed0a-48de-9e4e-aa7843eb7338 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a165c895db0b64307ea7c71da3dce79de72785c6e2fe4a69c30f66732d6fa1e3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	471608001a369       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   3 seconds ago       Running             kube-proxy                0                   a165c895db0b6       kube-proxy-fpllw                            kube-system
	1cc8280cf3058       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   16 seconds ago      Running             kube-controller-manager   0                   7cb940c08c55c       kube-controller-manager-newest-cni-474984   kube-system
	5bf9da6c32de1       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   16 seconds ago      Running             etcd                      0                   031d6edd63ef9       etcd-newest-cni-474984                      kube-system
	cbd8ed9fcd7e3       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   16 seconds ago      Running             kube-scheduler            0                   f11b78bc31641       kube-scheduler-newest-cni-474984            kube-system
	940e30498f8c9       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   17 seconds ago      Running             kube-apiserver            0                   81299d0fcb43c       kube-apiserver-newest-cni-474984            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-474984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-474984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=newest-cni-474984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_08_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:08:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-474984
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:08:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:08:49 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:08:49 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:08:49 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 10:08:49 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-474984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                ec551f0b-0c63-4d9f-9877-0a8f892afcb7
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-474984                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-92rlc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-474984             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-474984    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-fpllw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-474984             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-474984 event: Registered Node newest-cni-474984 in Controller
	
	
	==> dmesg <==
	[ +36.302701] overlayfs: idmapped layers are currently not supported
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	[Jan10 10:06] overlayfs: idmapped layers are currently not supported
	[ +32.420107] overlayfs: idmapped layers are currently not supported
	[Jan10 10:07] overlayfs: idmapped layers are currently not supported
	[ +31.436967] overlayfs: idmapped layers are currently not supported
	[Jan10 10:08] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5bf9da6c32de16d48105f3a75a0caa65731fe31fa827bbfa082f6bfc0ad788bc] <==
	{"level":"info","ts":"2026-01-10T10:08:41.921181Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:08:42.375294Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T10:08:42.375421Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T10:08:42.375504Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T10:08:42.375559Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:08:42.375613Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:08:42.393512Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:08:42.393638Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:08:42.393684Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T10:08:42.393732Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:08:42.404174Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:08:42.412794Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-474984 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:08:42.414052Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:08:42.414167Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:08:42.414224Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T10:08:42.414288Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T10:08:42.414408Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T10:08:42.414489Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:08:42.414613Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:08:42.426480Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:08:42.426604Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T10:08:42.429089Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:08:42.457290Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:08:42.470887Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T10:08:42.473529Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:08:58 up  2:51,  0 user,  load average: 4.74, 2.53, 2.17
	Linux newest-cni-474984 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [940e30498f8c9e0f9e8a24b3b86172f6da5100b6af1e7a0fc70159f43007d532] <==
	I0110 10:08:45.808262       1 policy_source.go:248] refreshing policies
	I0110 10:08:45.861957       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:08:45.869855       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:08:45.870997       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	E0110 10:08:45.874678       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0110 10:08:45.897979       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:08:45.898121       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 10:08:45.992756       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:08:46.445520       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 10:08:46.454033       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 10:08:46.454220       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:08:47.508185       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:08:47.622098       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:08:47.757873       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 10:08:47.767959       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 10:08:47.769482       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 10:08:47.775175       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:08:48.657823       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:08:48.910087       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:08:48.946895       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 10:08:48.976815       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 10:08:54.334826       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:08:54.345346       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 10:08:54.410762       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:08:54.658128       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [1cc8280cf3058a7024ab9c2e2c04dfd933b80304aa2f04adc3215a2b0c3b6de7] <==
	I0110 10:08:53.611375       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.611999       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.612106       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.612136       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.623107       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.623239       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 10:08:53.623316       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-474984"
	I0110 10:08:53.623369       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 10:08:53.623401       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.623424       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.629592       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.630030       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.630072       1 range_allocator.go:177] "Sending events to api server"
	I0110 10:08:53.630098       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 10:08:53.630102       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:08:53.630107       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.630558       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.630603       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.630638       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.637410       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:08:53.645766       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-474984" podCIDRs=["10.42.0.0/24"]
	I0110 10:08:53.737538       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.810235       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:53.810265       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:08:53.810272       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [471608001a369d0b0173aa19e9555cafeca5771e9bd10f017cdeac79b01eb236] <==
	I0110 10:08:55.622758       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:08:55.842181       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:08:55.942898       1 shared_informer.go:377] "Caches are synced"
	I0110 10:08:55.942934       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 10:08:55.943031       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:08:56.361349       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:08:56.361399       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:08:56.365301       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:08:56.365604       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:08:56.365614       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:08:56.366922       1 config.go:200] "Starting service config controller"
	I0110 10:08:56.366932       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:08:56.366949       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:08:56.366954       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:08:56.366972       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:08:56.366975       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:08:56.367637       1 config.go:309] "Starting node config controller"
	I0110 10:08:56.367645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:08:56.367651       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:08:56.467472       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 10:08:56.467503       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:08:56.467546       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cbd8ed9fcd7e333d47932b9244fedc6b683be69054550c345f7e6a1ba873ca0f] <==
	E0110 10:08:45.743411       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 10:08:45.743858       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 10:08:45.744013       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 10:08:45.744072       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 10:08:45.744352       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 10:08:45.744410       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 10:08:45.744458       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 10:08:45.744554       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 10:08:45.744846       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 10:08:45.749590       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 10:08:45.761121       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 10:08:45.761392       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 10:08:45.761279       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 10:08:46.614315       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 10:08:46.662797       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 10:08:46.798226       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 10:08:46.877142       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 10:08:46.918861       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 10:08:46.969676       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 10:08:46.970781       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 10:08:46.977826       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 10:08:46.994782       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 10:08:47.010006       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 10:08:47.025072       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	I0110 10:08:49.980597       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:08:50 newest-cni-474984 kubelet[1296]: E0110 10:08:50.179045    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-474984" containerName="kube-apiserver"
	Jan 10 10:08:50 newest-cni-474984 kubelet[1296]: E0110 10:08:50.181037    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-474984" containerName="kube-controller-manager"
	Jan 10 10:08:50 newest-cni-474984 kubelet[1296]: I0110 10:08:50.182803    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-474984" podStartSLOduration=1.18278905 podStartE2EDuration="1.18278905s" podCreationTimestamp="2026-01-10 10:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:08:50.182582748 +0000 UTC m=+1.387188721" watchObservedRunningTime="2026-01-10 10:08:50.18278905 +0000 UTC m=+1.387395031"
	Jan 10 10:08:51 newest-cni-474984 kubelet[1296]: E0110 10:08:51.179559    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-474984" containerName="kube-apiserver"
	Jan 10 10:08:51 newest-cni-474984 kubelet[1296]: E0110 10:08:51.180210    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-474984" containerName="kube-scheduler"
	Jan 10 10:08:51 newest-cni-474984 kubelet[1296]: E0110 10:08:51.180380    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-474984" containerName="etcd"
	Jan 10 10:08:52 newest-cni-474984 kubelet[1296]: E0110 10:08:52.181221    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-474984" containerName="etcd"
	Jan 10 10:08:52 newest-cni-474984 kubelet[1296]: E0110 10:08:52.184816    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-474984" containerName="kube-apiserver"
	Jan 10 10:08:52 newest-cni-474984 kubelet[1296]: E0110 10:08:52.185044    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-474984" containerName="kube-scheduler"
	Jan 10 10:08:53 newest-cni-474984 kubelet[1296]: I0110 10:08:53.729549    1296 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 10 10:08:53 newest-cni-474984 kubelet[1296]: I0110 10:08:53.730438    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 10 10:08:54 newest-cni-474984 kubelet[1296]: I0110 10:08:54.828633    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc315022-efa7-4370-896c-36d094209e88-lib-modules\") pod \"kube-proxy-fpllw\" (UID: \"bc315022-efa7-4370-896c-36d094209e88\") " pod="kube-system/kube-proxy-fpllw"
	Jan 10 10:08:54 newest-cni-474984 kubelet[1296]: I0110 10:08:54.828690    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t2v5\" (UniqueName: \"kubernetes.io/projected/f8e102eb-cf98-403c-9e68-b249d36ea4eb-kube-api-access-8t2v5\") pod \"kindnet-92rlc\" (UID: \"f8e102eb-cf98-403c-9e68-b249d36ea4eb\") " pod="kube-system/kindnet-92rlc"
	Jan 10 10:08:54 newest-cni-474984 kubelet[1296]: I0110 10:08:54.828718    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc315022-efa7-4370-896c-36d094209e88-xtables-lock\") pod \"kube-proxy-fpllw\" (UID: \"bc315022-efa7-4370-896c-36d094209e88\") " pod="kube-system/kube-proxy-fpllw"
	Jan 10 10:08:54 newest-cni-474984 kubelet[1296]: I0110 10:08:54.828833    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9hh8\" (UniqueName: \"kubernetes.io/projected/bc315022-efa7-4370-896c-36d094209e88-kube-api-access-s9hh8\") pod \"kube-proxy-fpllw\" (UID: \"bc315022-efa7-4370-896c-36d094209e88\") " pod="kube-system/kube-proxy-fpllw"
	Jan 10 10:08:54 newest-cni-474984 kubelet[1296]: I0110 10:08:54.828854    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8e102eb-cf98-403c-9e68-b249d36ea4eb-lib-modules\") pod \"kindnet-92rlc\" (UID: \"f8e102eb-cf98-403c-9e68-b249d36ea4eb\") " pod="kube-system/kindnet-92rlc"
	Jan 10 10:08:54 newest-cni-474984 kubelet[1296]: I0110 10:08:54.828873    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f8e102eb-cf98-403c-9e68-b249d36ea4eb-cni-cfg\") pod \"kindnet-92rlc\" (UID: \"f8e102eb-cf98-403c-9e68-b249d36ea4eb\") " pod="kube-system/kindnet-92rlc"
	Jan 10 10:08:54 newest-cni-474984 kubelet[1296]: I0110 10:08:54.828889    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8e102eb-cf98-403c-9e68-b249d36ea4eb-xtables-lock\") pod \"kindnet-92rlc\" (UID: \"f8e102eb-cf98-403c-9e68-b249d36ea4eb\") " pod="kube-system/kindnet-92rlc"
	Jan 10 10:08:54 newest-cni-474984 kubelet[1296]: I0110 10:08:54.828964    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bc315022-efa7-4370-896c-36d094209e88-kube-proxy\") pod \"kube-proxy-fpllw\" (UID: \"bc315022-efa7-4370-896c-36d094209e88\") " pod="kube-system/kube-proxy-fpllw"
	Jan 10 10:08:55 newest-cni-474984 kubelet[1296]: I0110 10:08:55.088559    1296 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 10:08:55 newest-cni-474984 kubelet[1296]: W0110 10:08:55.385603    1296 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/crio-a165c895db0b64307ea7c71da3dce79de72785c6e2fe4a69c30f66732d6fa1e3 WatchSource:0}: Error finding container a165c895db0b64307ea7c71da3dce79de72785c6e2fe4a69c30f66732d6fa1e3: Status 404 returned error can't find the container with id a165c895db0b64307ea7c71da3dce79de72785c6e2fe4a69c30f66732d6fa1e3
	Jan 10 10:08:55 newest-cni-474984 kubelet[1296]: W0110 10:08:55.415489    1296 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/crio-a4516b138d2faef418e75e8770c4270773600ab5d5b2df020aa09477cc89e86a WatchSource:0}: Error finding container a4516b138d2faef418e75e8770c4270773600ab5d5b2df020aa09477cc89e86a: Status 404 returned error can't find the container with id a4516b138d2faef418e75e8770c4270773600ab5d5b2df020aa09477cc89e86a
	Jan 10 10:08:56 newest-cni-474984 kubelet[1296]: E0110 10:08:56.575378    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-474984" containerName="kube-controller-manager"
	Jan 10 10:08:56 newest-cni-474984 kubelet[1296]: I0110 10:08:56.605734    1296 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-fpllw" podStartSLOduration=2.605720369 podStartE2EDuration="2.605720369s" podCreationTimestamp="2026-01-10 10:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 10:08:56.248718019 +0000 UTC m=+7.453323992" watchObservedRunningTime="2026-01-10 10:08:56.605720369 +0000 UTC m=+7.810326334"
	Jan 10 10:08:57 newest-cni-474984 kubelet[1296]: E0110 10:08:57.864806    1296 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-474984" containerName="kube-scheduler"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-474984 -n newest-cni-474984
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-474984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-474984 describe pod coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-474984 describe pod coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner: exit status 1 (84.165612ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-p8q4j" not found
	Error from server (NotFound): pods "coredns-7d764666f9-xpfml" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-474984 describe pod coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (7.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-474984 --alsologtostderr -v=1
E0110 10:09:22.655409  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:09:22.660757  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:09:22.670998  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:09:22.691255  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:09:22.732172  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:09:22.812611  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:09:22.973522  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:09:23.294229  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:09:23.934867  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-474984 --alsologtostderr -v=1: exit status 80 (2.309644862s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-474984 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 10:09:21.816950  536528 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:09:21.817099  536528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:09:21.817106  536528 out.go:374] Setting ErrFile to fd 2...
	I0110 10:09:21.817111  536528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:09:21.817366  536528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:09:21.817608  536528 out.go:368] Setting JSON to false
	I0110 10:09:21.817635  536528 mustload.go:66] Loading cluster: newest-cni-474984
	I0110 10:09:21.818092  536528 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:21.818540  536528 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:21.840438  536528 host.go:66] Checking if "newest-cni-474984" exists ...
	I0110 10:09:21.840805  536528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:09:21.954302  536528 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2026-01-10 10:09:21.943691013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:09:21.954954  536528 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-474984 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 10:09:21.958567  536528 out.go:179] * Pausing node newest-cni-474984 ... 
	I0110 10:09:21.961635  536528 host.go:66] Checking if "newest-cni-474984" exists ...
	I0110 10:09:21.961977  536528 ssh_runner.go:195] Run: systemctl --version
	I0110 10:09:21.962034  536528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:21.986336  536528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:22.107724  536528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:09:22.140124  536528 pause.go:52] kubelet running: true
	I0110 10:09:22.140194  536528 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:09:22.511854  536528 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:09:22.511950  536528 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:09:22.614999  536528 cri.go:96] found id: "1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273"
	I0110 10:09:22.615035  536528 cri.go:96] found id: "d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f"
	I0110 10:09:22.615041  536528 cri.go:96] found id: "c04536f0d830e2b002362320c09624c56206b491d85ba1ec8826ceb9d4beb039"
	I0110 10:09:22.615045  536528 cri.go:96] found id: "b7bd726e240ea1f2186079ed096f5a99813a912fb83d95e0fcfd8b144fb14609"
	I0110 10:09:22.615049  536528 cri.go:96] found id: "97be5a2a78c38d5d91cc97907b576cf5b92a3ca7d072bd074837d2e6d3d3c18b"
	I0110 10:09:22.615053  536528 cri.go:96] found id: "42bb52a58dfd69d45ae514c61bb67b183558e391991a95771906a18d17419a39"
	I0110 10:09:22.615074  536528 cri.go:96] found id: ""
	I0110 10:09:22.615147  536528 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:09:22.634090  536528 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:09:22Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:09:22.803343  536528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:09:22.834069  536528 pause.go:52] kubelet running: false
	I0110 10:09:22.834200  536528 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:09:23.139897  536528 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:09:23.140031  536528 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:09:23.319015  536528 cri.go:96] found id: "1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273"
	I0110 10:09:23.319034  536528 cri.go:96] found id: "d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f"
	I0110 10:09:23.319038  536528 cri.go:96] found id: "c04536f0d830e2b002362320c09624c56206b491d85ba1ec8826ceb9d4beb039"
	I0110 10:09:23.319041  536528 cri.go:96] found id: "b7bd726e240ea1f2186079ed096f5a99813a912fb83d95e0fcfd8b144fb14609"
	I0110 10:09:23.319045  536528 cri.go:96] found id: "97be5a2a78c38d5d91cc97907b576cf5b92a3ca7d072bd074837d2e6d3d3c18b"
	I0110 10:09:23.319048  536528 cri.go:96] found id: "42bb52a58dfd69d45ae514c61bb67b183558e391991a95771906a18d17419a39"
	I0110 10:09:23.319052  536528 cri.go:96] found id: ""
	I0110 10:09:23.319111  536528 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:09:23.732627  536528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 10:09:23.747206  536528 pause.go:52] kubelet running: false
	I0110 10:09:23.747271  536528 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 10:09:23.939488  536528 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 10:09:23.939595  536528 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 10:09:24.032249  536528 cri.go:96] found id: "1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273"
	I0110 10:09:24.032280  536528 cri.go:96] found id: "d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f"
	I0110 10:09:24.032290  536528 cri.go:96] found id: "c04536f0d830e2b002362320c09624c56206b491d85ba1ec8826ceb9d4beb039"
	I0110 10:09:24.032295  536528 cri.go:96] found id: "b7bd726e240ea1f2186079ed096f5a99813a912fb83d95e0fcfd8b144fb14609"
	I0110 10:09:24.032298  536528 cri.go:96] found id: "97be5a2a78c38d5d91cc97907b576cf5b92a3ca7d072bd074837d2e6d3d3c18b"
	I0110 10:09:24.032302  536528 cri.go:96] found id: "42bb52a58dfd69d45ae514c61bb67b183558e391991a95771906a18d17419a39"
	I0110 10:09:24.032305  536528 cri.go:96] found id: ""
	I0110 10:09:24.032375  536528 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 10:09:24.047260  536528 out.go:203] 
	W0110 10:09:24.050340  536528 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:09:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:09:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 10:09:24.050422  536528 out.go:285] * 
	* 
	W0110 10:09:24.054982  536528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 10:09:24.058693  536528 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-474984 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-474984
helpers_test.go:244: (dbg) docker inspect newest-cni-474984:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913",
	        "Created": "2026-01-10T10:08:27.104727193Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 533158,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:09:04.853574773Z",
	            "FinishedAt": "2026-01-10T10:09:03.727518502Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/hosts",
	        "LogPath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913-json.log",
	        "Name": "/newest-cni-474984",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-474984:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-474984",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913",
	                "LowerDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-474984",
	                "Source": "/var/lib/docker/volumes/newest-cni-474984/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-474984",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-474984",
	                "name.minikube.sigs.k8s.io": "newest-cni-474984",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4156f4b2d2793139ac39b681a29e3f89bbb47a23d709da6bbe33f37c59e6f0c4",
	            "SandboxKey": "/var/run/docker/netns/4156f4b2d279",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-474984": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:6d:35:9b:36:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6ee25b83b7ac31be71c67e0a8b1d8dc5dbbff09959508135e36bff53cdc9f623",
	                    "EndpointID": "f7db690a1900c9497e244dab5adebf1f3d438c67d7a2da9da2af67caba8774bb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-474984",
	                        "fe5cd02e55d3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
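
The inspect dump above shows Docker's published port map for the paused node (22/tcp -> 127.0.0.1:33464, 8443/tcp -> 127.0.0.1:33467, and so on). Later in this log the harness reads that mapping with `docker container inspect -f` before opening an SSH session. The snippet below is a minimal, hypothetical sketch (not minikube source) of that same lookup using the Go template string visible in the log; the container name is the profile under test.

```go
// Sketch: resolve the host port Docker mapped to the container's SSH port (22/tcp),
// mirroring the `docker container inspect -f ...` call shown in the log below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log shows being passed to `docker container inspect -f`.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "newest-cni-474984").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// With the container state captured above, this would print 33464.
	fmt.Println("SSH host port:", strings.TrimSpace(string(out)))
}
```
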
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474984 -n newest-cni-474984
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474984 -n newest-cni-474984: exit status 2 (437.390023ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-474984 logs -n 25
E0110 10:09:25.215078  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-474984 logs -n 25: (1.376249995s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ stop    │ -p default-k8s-diff-port-820203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-820203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:08 UTC │
	│ image   │ embed-certs-219333 image list --format=json                                                                                                                                                                                                   │ embed-certs-219333                │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p embed-certs-219333 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-219333                │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ delete  │ -p embed-certs-219333                                                                                                                                                                                                                         │ embed-certs-219333                │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ delete  │ -p embed-certs-219333                                                                                                                                                                                                                         │ embed-certs-219333                │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ start   │ -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ image   │ default-k8s-diff-port-820203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p default-k8s-diff-port-820203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-820203                                                                                                                                                                                                               │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ delete  │ -p default-k8s-diff-port-820203                                                                                                                                                                                                               │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ start   │ -p test-preload-dl-gcs-469953 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-469953        │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-474984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-469953                                                                                                                                                                                                                 │ test-preload-dl-gcs-469953        │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ start   │ -p test-preload-dl-github-586120 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-586120     │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ stop    │ -p newest-cni-474984 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:09 UTC │
	│ delete  │ -p test-preload-dl-github-586120                                                                                                                                                                                                              │ test-preload-dl-github-586120     │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │ 10 Jan 26 10:09 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-877054 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-877054 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-877054                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-877054 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │ 10 Jan 26 10:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-474984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │ 10 Jan 26 10:09 UTC │
	│ start   │ -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │ 10 Jan 26 10:09 UTC │
	│ start   │ -p auto-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-255897                       │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │                     │
	│ image   │ newest-cni-474984 image list --format=json                                                                                                                                                                                                    │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │ 10 Jan 26 10:09 UTC │
	│ pause   │ -p newest-cni-474984 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:09:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:09:04.547472  532948 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:09:04.547700  532948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:09:04.547740  532948 out.go:374] Setting ErrFile to fd 2...
	I0110 10:09:04.547759  532948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:09:04.548190  532948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:09:04.548779  532948 out.go:368] Setting JSON to false
	I0110 10:09:04.549733  532948 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10294,"bootTime":1768029451,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:09:04.549853  532948 start.go:143] virtualization:  
	I0110 10:09:04.553222  532948 out.go:179] * [auto-255897] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:09:04.556133  532948 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:09:04.557825  532948 notify.go:221] Checking for updates...
	I0110 10:09:04.561992  532948 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:09:04.564876  532948 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:09:04.567752  532948 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:09:04.570587  532948 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:09:04.573416  532948 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:09:04.516844  532942 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:04.517417  532942 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:09:04.554603  532942 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:09:04.554720  532942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:09:04.621070  532942 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2026-01-10 10:09:04.611923391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:09:04.621175  532942 docker.go:319] overlay module found
	I0110 10:09:04.624245  532942 out.go:179] * Using the docker driver based on existing profile
	I0110 10:09:04.577119  532948 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:04.577231  532948 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:09:04.641087  532948 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:09:04.641199  532948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:09:04.725004  532948 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2026-01-10 10:09:04.714485655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:09:04.725118  532948 docker.go:319] overlay module found
	I0110 10:09:04.628364  532942 start.go:309] selected driver: docker
	I0110 10:09:04.628395  532942 start.go:928] validating driver "docker" against &{Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:09:04.628646  532942 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:09:04.629326  532942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:09:04.727049  532942 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2026-01-10 10:09:04.714485655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:09:04.727379  532942 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 10:09:04.727412  532942 cni.go:84] Creating CNI manager for ""
	I0110 10:09:04.727462  532942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:09:04.727504  532942 start.go:353] cluster config:
	{Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:09:04.728129  532948 out.go:179] * Using the docker driver based on user configuration
	I0110 10:09:04.730762  532942 out.go:179] * Starting "newest-cni-474984" primary control-plane node in "newest-cni-474984" cluster
	I0110 10:09:04.733599  532942 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:09:04.736539  532942 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:09:04.739336  532942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:09:04.739376  532942 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:09:04.739385  532942 cache.go:65] Caching tarball of preloaded images
	I0110 10:09:04.739480  532942 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:09:04.739496  532942 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:09:04.739614  532942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/config.json ...
	I0110 10:09:04.739849  532942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:09:04.760878  532942 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:09:04.760897  532942 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:09:04.760912  532942 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:09:04.760945  532942 start.go:360] acquireMachinesLock for newest-cni-474984: {Name:mk0515f3568da12603bdab21609a1a4ed360d8a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:09:04.761000  532942 start.go:364] duration metric: took 37.875µs to acquireMachinesLock for "newest-cni-474984"
	I0110 10:09:04.761021  532942 start.go:96] Skipping create...Using existing machine configuration
	I0110 10:09:04.761026  532942 fix.go:54] fixHost starting: 
	I0110 10:09:04.761395  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:04.791885  532942 fix.go:112] recreateIfNeeded on newest-cni-474984: state=Stopped err=<nil>
	W0110 10:09:04.791927  532942 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 10:09:04.730798  532948 start.go:309] selected driver: docker
	I0110 10:09:04.730813  532948 start.go:928] validating driver "docker" against <nil>
	I0110 10:09:04.730826  532948 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:09:04.731569  532948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:09:04.808050  532948 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2026-01-10 10:09:04.797472637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:09:04.808217  532948 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 10:09:04.808452  532948 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:09:04.811503  532948 out.go:179] * Using Docker driver with root privileges
	I0110 10:09:04.814387  532948 cni.go:84] Creating CNI manager for ""
	I0110 10:09:04.814461  532948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:09:04.814487  532948 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 10:09:04.814575  532948 start.go:353] cluster config:
	{Name:auto-255897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-255897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I0110 10:09:04.817590  532948 out.go:179] * Starting "auto-255897" primary control-plane node in "auto-255897" cluster
	I0110 10:09:04.820465  532948 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:09:04.823551  532948 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:09:04.826288  532948 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:09:04.826340  532948 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:09:04.826350  532948 cache.go:65] Caching tarball of preloaded images
	I0110 10:09:04.826438  532948 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:09:04.826448  532948 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:09:04.826562  532948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/config.json ...
	I0110 10:09:04.826581  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/config.json: {Name:mk940365c5b418bb0df963905068fbd0c77bad75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:04.826742  532948 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:09:04.851683  532948 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:09:04.851768  532948 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:09:04.851829  532948 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:09:04.851908  532948 start.go:360] acquireMachinesLock for auto-255897: {Name:mka0fc2e0dc9378e55969c5f235dbf5b050f9220 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:09:04.852088  532948 start.go:364] duration metric: took 159.082µs to acquireMachinesLock for "auto-255897"
	I0110 10:09:04.852158  532948 start.go:93] Provisioning new machine with config: &{Name:auto-255897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-255897 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:09:04.852398  532948 start.go:125] createHost starting for "" (driver="docker")
	I0110 10:09:04.796251  532942 out.go:252] * Restarting existing docker container for "newest-cni-474984" ...
	I0110 10:09:04.796342  532942 cli_runner.go:164] Run: docker start newest-cni-474984
	I0110 10:09:05.187185  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:05.220965  532942 kic.go:430] container "newest-cni-474984" state is running.
	I0110 10:09:05.221349  532942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:09:05.254830  532942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/config.json ...
	I0110 10:09:05.255052  532942 machine.go:94] provisionDockerMachine start ...
	I0110 10:09:05.255122  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:05.281755  532942 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:05.282076  532942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0110 10:09:05.282085  532942 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:09:05.282898  532942 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36428->127.0.0.1:33464: read: connection reset by peer
	I0110 10:09:08.436403  532942 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-474984
	
	I0110 10:09:08.436429  532942 ubuntu.go:182] provisioning hostname "newest-cni-474984"
	I0110 10:09:08.436543  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:08.455281  532942 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:08.455588  532942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0110 10:09:08.455604  532942 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-474984 && echo "newest-cni-474984" | sudo tee /etc/hostname
	I0110 10:09:08.618526  532942 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-474984
	
	I0110 10:09:08.618674  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:08.637662  532942 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:08.637996  532942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0110 10:09:08.638017  532942 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-474984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-474984/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-474984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:09:08.877130  532942 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:09:08.877162  532942 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:09:08.877182  532942 ubuntu.go:190] setting up certificates
	I0110 10:09:08.877192  532942 provision.go:84] configureAuth start
	I0110 10:09:08.877255  532942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:09:08.903160  532942 provision.go:143] copyHostCerts
	I0110 10:09:08.903229  532942 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:09:08.903251  532942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:09:08.904553  532942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:09:08.904688  532942 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:09:08.904702  532942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:09:08.904735  532942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:09:08.904800  532942 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:09:08.904810  532942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:09:08.904842  532942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:09:08.904901  532942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.newest-cni-474984 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-474984]
	I0110 10:09:04.859799  532948 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 10:09:04.860168  532948 start.go:159] libmachine.API.Create for "auto-255897" (driver="docker")
	I0110 10:09:04.860198  532948 client.go:173] LocalClient.Create starting
	I0110 10:09:04.860266  532948 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 10:09:04.860296  532948 main.go:144] libmachine: Decoding PEM data...
	I0110 10:09:04.860311  532948 main.go:144] libmachine: Parsing certificate...
	I0110 10:09:04.860363  532948 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 10:09:04.860387  532948 main.go:144] libmachine: Decoding PEM data...
	I0110 10:09:04.860398  532948 main.go:144] libmachine: Parsing certificate...
	I0110 10:09:04.860869  532948 cli_runner.go:164] Run: docker network inspect auto-255897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 10:09:04.876408  532948 cli_runner.go:211] docker network inspect auto-255897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 10:09:04.876554  532948 network_create.go:284] running [docker network inspect auto-255897] to gather additional debugging logs...
	I0110 10:09:04.876603  532948 cli_runner.go:164] Run: docker network inspect auto-255897
	W0110 10:09:04.897478  532948 cli_runner.go:211] docker network inspect auto-255897 returned with exit code 1
	I0110 10:09:04.897506  532948 network_create.go:287] error running [docker network inspect auto-255897]: docker network inspect auto-255897: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-255897 not found
	I0110 10:09:04.897520  532948 network_create.go:289] output of [docker network inspect auto-255897]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-255897 not found
	
	** /stderr **
	I0110 10:09:04.897619  532948 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:09:04.936406  532948 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 10:09:04.937056  532948 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 10:09:04.937299  532948 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 10:09:04.937633  532948 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6ee25b83b7ac IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:4f:cc:15:87:5f} reservation:<nil>}
	I0110 10:09:04.938050  532948 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2bf40}
	I0110 10:09:04.938068  532948 network_create.go:124] attempt to create docker network auto-255897 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 10:09:04.938131  532948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-255897 auto-255897
	I0110 10:09:05.025712  532948 network_create.go:108] docker network auto-255897 192.168.85.0/24 created
	I0110 10:09:05.025749  532948 kic.go:121] calculated static IP "192.168.85.2" for the "auto-255897" container
	I0110 10:09:05.025834  532948 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 10:09:05.040949  532948 cli_runner.go:164] Run: docker volume create auto-255897 --label name.minikube.sigs.k8s.io=auto-255897 --label created_by.minikube.sigs.k8s.io=true
	I0110 10:09:05.059134  532948 oci.go:103] Successfully created a docker volume auto-255897
	I0110 10:09:05.059225  532948 cli_runner.go:164] Run: docker run --rm --name auto-255897-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-255897 --entrypoint /usr/bin/test -v auto-255897:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 10:09:05.689028  532948 oci.go:107] Successfully prepared a docker volume auto-255897
	I0110 10:09:05.689088  532948 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:09:05.689097  532948 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 10:09:05.689176  532948 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-255897:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 10:09:08.703109  532948 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-255897:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.013894847s)
	I0110 10:09:08.703137  532948 kic.go:203] duration metric: took 3.014035492s to extract preloaded images to volume ...
	W0110 10:09:08.703256  532948 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 10:09:08.703356  532948 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 10:09:08.814933  532948 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-255897 --name auto-255897 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-255897 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-255897 --network auto-255897 --ip 192.168.85.2 --volume auto-255897:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 10:09:09.186280  532948 cli_runner.go:164] Run: docker container inspect auto-255897 --format={{.State.Running}}
	I0110 10:09:09.235249  532948 cli_runner.go:164] Run: docker container inspect auto-255897 --format={{.State.Status}}
	I0110 10:09:09.289926  532948 cli_runner.go:164] Run: docker exec auto-255897 stat /var/lib/dpkg/alternatives/iptables
	I0110 10:09:09.359578  532948 oci.go:144] the created container "auto-255897" has a running status.
	I0110 10:09:09.359604  532948 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa...
	I0110 10:09:09.630594  532942 provision.go:177] copyRemoteCerts
	I0110 10:09:09.630678  532942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:09:09.630730  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:09.683060  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:09.858586  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:09:09.880936  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 10:09:09.909681  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:09:09.947621  532942 provision.go:87] duration metric: took 1.070407713s to configureAuth
	I0110 10:09:09.947652  532942 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:09:09.947865  532942 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:09.947973  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:09.968485  532942 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:09.968815  532942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0110 10:09:09.968841  532942 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:09:10.314385  532942 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:09:10.314408  532942 machine.go:97] duration metric: took 5.05934622s to provisionDockerMachine
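
The extra runtime flags are injected through a sysconfig file rather than by editing the cri-o unit itself. Assuming the node's crio.service reads /etc/sysconfig/crio.minikube, the equivalent by hand is:

sudo mkdir -p /etc/sysconfig
printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
sudo systemctl restart crio
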
	I0110 10:09:10.314422  532942 start.go:293] postStartSetup for "newest-cni-474984" (driver="docker")
	I0110 10:09:10.314432  532942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:09:10.314507  532942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:09:10.314556  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:10.332672  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:10.436558  532942 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:09:10.440101  532942 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:09:10.440129  532942 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:09:10.440141  532942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:09:10.440199  532942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:09:10.440288  532942 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:09:10.440400  532942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:09:10.447987  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:09:10.466524  532942 start.go:296] duration metric: took 152.087245ms for postStartSetup
	I0110 10:09:10.466608  532942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:09:10.466654  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:10.483562  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:10.581440  532942 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:09:10.586009  532942 fix.go:56] duration metric: took 5.824976176s for fixHost
	I0110 10:09:10.586036  532942 start.go:83] releasing machines lock for "newest-cni-474984", held for 5.825026416s
	I0110 10:09:10.586107  532942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:09:10.602607  532942 ssh_runner.go:195] Run: cat /version.json
	I0110 10:09:10.602668  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:10.602932  532942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:09:10.602985  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:10.623503  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:10.636593  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:10.733036  532942 ssh_runner.go:195] Run: systemctl --version
	I0110 10:09:10.877277  532942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:09:10.936292  532942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:09:10.942400  532942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:09:10.942553  532942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:09:10.960847  532942 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 10:09:10.960921  532942 start.go:496] detecting cgroup driver to use...
	I0110 10:09:10.960965  532942 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:09:10.961039  532942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:09:10.988249  532942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:09:11.004109  532942 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:09:11.004265  532942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:09:11.028749  532942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:09:11.050492  532942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:09:11.199753  532942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:09:11.334386  532942 docker.go:234] disabling docker service ...
	I0110 10:09:11.334462  532942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:09:11.350711  532942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:09:11.366743  532942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:09:11.513189  532942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:09:11.641476  532942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
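
Only one runtime may own the CRI socket, so the docker-based services are stopped, disabled and masked before cri-o is configured. The same systemctl sequence, condensed:

for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
  sudo systemctl stop -f "$unit" || true
done
sudo systemctl disable cri-docker.socket docker.socket
sudo systemctl mask cri-docker.service docker.service
sudo systemctl is-active --quiet docker || echo "docker is no longer active"
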
	I0110 10:09:11.655163  532942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:09:11.669089  532942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:09:11.669209  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.679428  532942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:09:11.679504  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.688314  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.697019  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.705847  532942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:09:11.713863  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.722950  532942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.731360  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.740559  532942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:09:11.748405  532942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:09:11.755992  532942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:09:11.873435  532942 ssh_runner.go:195] Run: sudo systemctl restart crio
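
All of the cri-o tweaks above are plain sed edits of the same drop-in config followed by a restart. Collapsed into a short script (same file and values as in the log):

CONF=/etc/crio/crio.conf.d/02-crio.conf
sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
sudo systemctl daemon-reload && sudo systemctl restart crio
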
	I0110 10:09:12.052964  532942 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:09:12.053088  532942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:09:12.057081  532942 start.go:574] Will wait 60s for crictl version
	I0110 10:09:12.057176  532942 ssh_runner.go:195] Run: which crictl
	I0110 10:09:12.060859  532942 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:09:12.086258  532942 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:09:12.086343  532942 ssh_runner.go:195] Run: crio --version
	I0110 10:09:12.118872  532942 ssh_runner.go:195] Run: crio --version
	I0110 10:09:12.149631  532942 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:09:12.152545  532942 cli_runner.go:164] Run: docker network inspect newest-cni-474984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:09:12.168480  532942 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:09:12.173014  532942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
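
The hosts entry is written idempotently: any stale host.minikube.internal line is filtered out and the current gateway address appended. The same trick in isolation:

{ grep -v $'\thost.minikube.internal$' /etc/hosts
  printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
sudo cp /tmp/hosts.$$ /etc/hosts
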
	I0110 10:09:12.187857  532942 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 10:09:12.190623  532942 kubeadm.go:884] updating cluster {Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:09:12.190763  532942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:09:12.190832  532942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:09:12.229719  532942 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:09:12.229743  532942 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:09:12.229802  532942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:09:12.255437  532942 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:09:12.255463  532942 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:09:12.255471  532942 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 10:09:12.255621  532942 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-474984 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
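
That generated fragment is pushed a few lines below as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Writing roughly the same drop-in by hand (ExecStart flags copied from the log) would look like:

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-474984 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
EOF
sudo systemctl daemon-reload && sudo systemctl start kubelet
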
	I0110 10:09:12.255763  532942 ssh_runner.go:195] Run: crio config
	I0110 10:09:12.306318  532942 cni.go:84] Creating CNI manager for ""
	I0110 10:09:12.306342  532942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:09:12.306362  532942 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 10:09:12.306390  532942 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-474984 NodeName:newest-cni-474984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:09:12.306524  532942 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-474984"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:09:12.306606  532942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:09:12.314647  532942 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:09:12.314731  532942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:09:12.322789  532942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 10:09:12.335992  532942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:09:12.349007  532942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0110 10:09:12.361854  532942 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:09:12.365722  532942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:09:12.377072  532942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:09:12.498696  532942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:09:12.515087  532942 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984 for IP: 192.168.76.2
	I0110 10:09:12.515107  532942 certs.go:195] generating shared ca certs ...
	I0110 10:09:12.515122  532942 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:12.515291  532942 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:09:12.515354  532942 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:09:12.515367  532942 certs.go:257] generating profile certs ...
	I0110 10:09:12.515474  532942 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.key
	I0110 10:09:12.515549  532942 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key.168eb993
	I0110 10:09:12.515604  532942 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key
	I0110 10:09:12.515738  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:09:12.515787  532942 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:09:12.516075  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:09:12.516155  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:09:12.516195  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:09:12.516224  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:09:12.516292  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:09:12.517652  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:09:12.542076  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:09:12.562399  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:09:12.582763  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:09:12.607334  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 10:09:12.628931  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 10:09:12.649371  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:09:12.672184  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 10:09:12.698386  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:09:12.726594  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:09:12.745613  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:09:12.764207  532942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:09:12.777368  532942 ssh_runner.go:195] Run: openssl version
	I0110 10:09:12.783596  532942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:09:12.792445  532942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:09:12.800643  532942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:09:12.804595  532942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:09:12.804731  532942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:09:12.846586  532942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:09:12.854069  532942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:12.861658  532942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:09:12.869399  532942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:12.873499  532942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:12.873568  532942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:12.915511  532942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:09:12.923091  532942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:09:12.930401  532942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:09:12.938113  532942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:09:12.941943  532942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:09:12.942028  532942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:09:12.984426  532942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
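
Each CA lands in /etc/ssl/certs twice: under its own name and under the OpenSSL subject-hash name (the b5213941.0 / 51391683.0 links checked above), which is the name TLS clients actually resolve. Reproducing that by hand for the minikube CA:

sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
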
	I0110 10:09:12.992414  532942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:09:12.996464  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 10:09:13.044414  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 10:09:13.085785  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 10:09:13.131501  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 10:09:13.220473  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 10:09:13.305443  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
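
The -checkend 86400 calls are cheap expiry probes: openssl exits non-zero if the certificate expires within the next 24 hours, letting the caller decide whether it needs regenerating. Standalone:

if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
  echo "certificate valid for at least another 24h"
else
  echo "certificate expires within 24h"
fi
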
	I0110 10:09:13.369895  532942 kubeadm.go:401] StartCluster: {Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:09:13.369992  532942 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:09:13.370077  532942 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:09:13.537905  532942 cri.go:96] found id: "c04536f0d830e2b002362320c09624c56206b491d85ba1ec8826ceb9d4beb039"
	I0110 10:09:13.537924  532942 cri.go:96] found id: "b7bd726e240ea1f2186079ed096f5a99813a912fb83d95e0fcfd8b144fb14609"
	I0110 10:09:13.537928  532942 cri.go:96] found id: "97be5a2a78c38d5d91cc97907b576cf5b92a3ca7d072bd074837d2e6d3d3c18b"
	I0110 10:09:13.537932  532942 cri.go:96] found id: "42bb52a58dfd69d45ae514c61bb67b183558e391991a95771906a18d17419a39"
	I0110 10:09:13.537935  532942 cri.go:96] found id: ""
	I0110 10:09:13.537985  532942 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 10:09:13.550915  532942 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:09:13Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:09:13.550989  532942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:09:13.568155  532942 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 10:09:13.568176  532942 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 10:09:13.568228  532942 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 10:09:13.599625  532942 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 10:09:13.600013  532942 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-474984" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:09:13.600097  532942 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-308033/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-474984" cluster setting kubeconfig missing "newest-cni-474984" context setting]
	I0110 10:09:13.600353  532942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:13.601595  532942 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 10:09:13.633294  532942 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 10:09:13.633328  532942 kubeadm.go:602] duration metric: took 65.146195ms to restartPrimaryControlPlane
	I0110 10:09:13.633338  532942 kubeadm.go:403] duration metric: took 263.455591ms to StartCluster
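
The restart path decides whether the control plane needs reconfiguring by diffing the freshly rendered kubeadm.yaml.new against what is already on disk; an empty diff means the running cluster is left alone, as happened here. A rough sketch of that decision:

if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
  echo "rendered kubeadm config matches what is on disk - no reconfiguration needed"
else
  echo "kubeadm config drifted - the control plane would be reconfigured"
fi
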
	I0110 10:09:13.633354  532942 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:13.633417  532942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:09:13.633999  532942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:13.634222  532942 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:09:13.634520  532942 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:13.634565  532942 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:09:13.634630  532942 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-474984"
	I0110 10:09:13.634647  532942 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-474984"
	W0110 10:09:13.634653  532942 addons.go:248] addon storage-provisioner should already be in state true
	I0110 10:09:13.634679  532942 host.go:66] Checking if "newest-cni-474984" exists ...
	I0110 10:09:13.635302  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:13.635786  532942 addons.go:70] Setting dashboard=true in profile "newest-cni-474984"
	I0110 10:09:13.635801  532942 addons.go:239] Setting addon dashboard=true in "newest-cni-474984"
	W0110 10:09:13.635807  532942 addons.go:248] addon dashboard should already be in state true
	I0110 10:09:13.635827  532942 host.go:66] Checking if "newest-cni-474984" exists ...
	I0110 10:09:13.636226  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:13.647053  532942 addons.go:70] Setting default-storageclass=true in profile "newest-cni-474984"
	I0110 10:09:13.647178  532942 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-474984"
	I0110 10:09:13.648054  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:13.660633  532942 out.go:179] * Verifying Kubernetes components...
	I0110 10:09:13.676734  532942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:09:13.717944  532942 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:09:13.722421  532942 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:09:13.722449  532942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:09:13.722515  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:13.740793  532942 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 10:09:13.745753  532942 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 10:09:13.748681  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 10:09:13.748706  532942 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 10:09:13.748781  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:13.761358  532942 addons.go:239] Setting addon default-storageclass=true in "newest-cni-474984"
	W0110 10:09:13.761379  532942 addons.go:248] addon default-storageclass should already be in state true
	I0110 10:09:13.761402  532942 host.go:66] Checking if "newest-cni-474984" exists ...
	I0110 10:09:13.768948  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:13.812701  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:13.835610  532942 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:09:13.835642  532942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:09:13.835702  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:13.844854  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:13.924690  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:14.052617  532942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:09:14.115248  532942 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:09:14.115388  532942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:09:14.130392  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 10:09:14.130418  532942 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 10:09:14.167095  532942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:09:14.193455  532942 api_server.go:72] duration metric: took 559.198481ms to wait for apiserver process to appear ...
	I0110 10:09:14.193481  532942 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:09:14.193499  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
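
The healthz wait is a polling loop against the apiserver's /healthz endpoint on the advertised address. A minimal sketch, assuming anonymous access to /healthz (the Kubernetes default via the system:public-info-viewer role):

until curl -sk https://192.168.76.2:8443/healthz | grep -q ok; do
  sleep 1
done
echo "apiserver reports healthy"
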
	I0110 10:09:14.203113  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 10:09:14.203134  532942 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 10:09:14.246831  532942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:09:14.298904  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 10:09:14.298930  532942 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 10:09:14.412279  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 10:09:14.412310  532942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 10:09:09.835438  532948 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 10:09:09.856689  532948 cli_runner.go:164] Run: docker container inspect auto-255897 --format={{.State.Status}}
	I0110 10:09:09.876683  532948 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 10:09:09.876702  532948 kic_runner.go:114] Args: [docker exec --privileged auto-255897 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 10:09:09.924217  532948 cli_runner.go:164] Run: docker container inspect auto-255897 --format={{.State.Status}}
	I0110 10:09:09.946429  532948 machine.go:94] provisionDockerMachine start ...
	I0110 10:09:09.946514  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:09.966053  532948 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:09.966383  532948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I0110 10:09:09.966392  532948 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:09:09.967002  532948 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52640->127.0.0.1:33469: read: connection reset by peer
	I0110 10:09:13.136619  532948 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-255897
	
	I0110 10:09:13.136690  532948 ubuntu.go:182] provisioning hostname "auto-255897"
	I0110 10:09:13.136769  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:13.167847  532948 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:13.168243  532948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I0110 10:09:13.168257  532948 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-255897 && echo "auto-255897" | sudo tee /etc/hostname
	I0110 10:09:13.375002  532948 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-255897
	
	I0110 10:09:13.375132  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:13.398520  532948 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:13.398834  532948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I0110 10:09:13.398849  532948 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-255897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-255897/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-255897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:09:13.572343  532948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:09:13.572403  532948 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:09:13.572440  532948 ubuntu.go:190] setting up certificates
	I0110 10:09:13.572463  532948 provision.go:84] configureAuth start
	I0110 10:09:13.572566  532948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-255897
	I0110 10:09:13.594268  532948 provision.go:143] copyHostCerts
	I0110 10:09:13.594336  532948 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:09:13.594345  532948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:09:13.594410  532948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:09:13.594497  532948 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:09:13.594503  532948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:09:13.594529  532948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:09:13.594589  532948 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:09:13.594594  532948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:09:13.594617  532948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:09:13.594670  532948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.auto-255897 san=[127.0.0.1 192.168.85.2 auto-255897 localhost minikube]
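
minikube generates this server certificate in Go, but the equivalent with the openssl CLI (purely illustrative; file names are placeholders, SANs copied from the log line above) would be:

openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
  -subj "/O=jenkins.auto-255897" -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:auto-255897,DNS:localhost,DNS:minikube') \
  -out server.pem
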
	I0110 10:09:14.414295  532948 provision.go:177] copyRemoteCerts
	I0110 10:09:14.414364  532948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:09:14.414405  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:14.435906  532948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa Username:docker}
	I0110 10:09:14.545747  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:09:14.573322  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0110 10:09:14.598385  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:09:14.623628  532948 provision.go:87] duration metric: took 1.051107198s to configureAuth
	I0110 10:09:14.623700  532948 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:09:14.623938  532948 config.go:182] Loaded profile config "auto-255897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:14.624105  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:14.660740  532948 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:14.661049  532948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I0110 10:09:14.661062  532948 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:09:15.081545  532948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:09:15.081574  532948 machine.go:97] duration metric: took 5.135125852s to provisionDockerMachine
	I0110 10:09:15.081584  532948 client.go:176] duration metric: took 10.221379796s to LocalClient.Create
	I0110 10:09:15.081598  532948 start.go:167] duration metric: took 10.221432949s to libmachine.API.Create "auto-255897"
	I0110 10:09:15.081605  532948 start.go:293] postStartSetup for "auto-255897" (driver="docker")
	I0110 10:09:15.081659  532948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:09:15.081751  532948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:09:15.081810  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:15.110196  532948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa Username:docker}
	I0110 10:09:15.240237  532948 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:09:15.243560  532948 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:09:15.243598  532948 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:09:15.243610  532948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:09:15.243675  532948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:09:15.243755  532948 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:09:15.243859  532948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:09:15.260976  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:09:15.290492  532948 start.go:296] duration metric: took 208.870899ms for postStartSetup
	I0110 10:09:15.290886  532948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-255897
	I0110 10:09:15.323106  532948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/config.json ...
	I0110 10:09:15.323390  532948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:09:15.323446  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:15.354618  532948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa Username:docker}
	I0110 10:09:15.472952  532948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:09:15.478569  532948 start.go:128] duration metric: took 10.626151086s to createHost
	I0110 10:09:15.478590  532948 start.go:83] releasing machines lock for "auto-255897", held for 10.626490706s
	I0110 10:09:15.478661  532948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-255897
	I0110 10:09:15.506210  532948 ssh_runner.go:195] Run: cat /version.json
	I0110 10:09:15.506261  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:15.507408  532948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:09:15.507470  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:15.550326  532948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa Username:docker}
	I0110 10:09:15.560751  532948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa Username:docker}
	I0110 10:09:15.798184  532948 ssh_runner.go:195] Run: systemctl --version
	I0110 10:09:15.808719  532948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:09:15.872381  532948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:09:15.880551  532948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:09:15.880627  532948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:09:15.937769  532948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 10:09:15.937796  532948 start.go:496] detecting cgroup driver to use...
	I0110 10:09:15.937830  532948 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:09:15.937883  532948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:09:15.962624  532948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:09:15.981961  532948 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:09:15.982035  532948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:09:16.016610  532948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:09:16.041101  532948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:09:16.255460  532948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:09:16.429325  532948 docker.go:234] disabling docker service ...
	I0110 10:09:16.429414  532948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:09:16.459828  532948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:09:16.476787  532948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:09:16.655281  532948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:09:16.808258  532948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:09:16.832341  532948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:09:16.852094  532948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:09:16.852174  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.861806  532948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:09:16.861887  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.871817  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.881478  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.891076  532948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:09:16.899898  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.909462  532948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.927241  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
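For reference, the runtime configuration written by the preceding commands can be reconstructed from the commands themselves; the CRI-O TOML table headers below are assumed from the stock 02-crio.conf drop-in and are not captured in this log:

    # /etc/crictl.yaml (written by the tee at 10:09:16.832341)
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf, relevant keys after the sed edits above
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The cgroupfs value matches the cgroup driver detected on the host at 10:09:15.937830, and the same driver is set again in the kubelet configuration generated further down.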
	I0110 10:09:16.936902  532948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:09:16.945571  532948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:09:16.953860  532948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:09:17.110061  532948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:09:17.304198  532948 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:09:17.304280  532948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:09:17.311200  532948 start.go:574] Will wait 60s for crictl version
	I0110 10:09:17.311281  532948 ssh_runner.go:195] Run: which crictl
	I0110 10:09:17.314977  532948 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:09:17.347273  532948 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:09:17.347370  532948 ssh_runner.go:195] Run: crio --version
	I0110 10:09:17.383485  532948 ssh_runner.go:195] Run: crio --version
	I0110 10:09:17.423556  532948 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:09:14.562647  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 10:09:14.562673  532942 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 10:09:14.608888  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 10:09:14.608914  532942 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 10:09:14.653927  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 10:09:14.653963  532942 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 10:09:14.686874  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 10:09:14.686903  532942 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 10:09:14.719555  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:09:14.719582  532942 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 10:09:14.746155  532942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:09:17.850526  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 10:09:17.850563  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 10:09:17.850576  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:18.068660  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 10:09:18.068697  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 10:09:18.193913  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:18.355250  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 10:09:18.355284  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 10:09:18.693989  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:18.729597  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:09:18.729626  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 10:09:19.194246  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:19.249341  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:09:19.249367  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 10:09:17.426481  532948 cli_runner.go:164] Run: docker network inspect auto-255897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:09:17.447711  532948 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 10:09:17.451467  532948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
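The hosts-file update here follows a pattern used throughout these logs: filter out any old entry, append the new one into a temp file, and install it with sudo cp (the shell redirection runs as the unprivileged SSH user, so only the final copy needs root). The net effect on this node is one added line:

    192.168.85.1	host.minikube.internal

The same pattern is repeated at 10:09:17.709148 for control-plane.minikube.internal.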
	I0110 10:09:17.461083  532948 kubeadm.go:884] updating cluster {Name:auto-255897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-255897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:09:17.461200  532948 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:09:17.461255  532948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:09:17.536290  532948 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:09:17.536311  532948 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:09:17.536367  532948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:09:17.578824  532948 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:09:17.578850  532948 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:09:17.578859  532948 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 10:09:17.578946  532948 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-255897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-255897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:09:17.579029  532948 ssh_runner.go:195] Run: crio config
	I0110 10:09:17.634338  532948 cni.go:84] Creating CNI manager for ""
	I0110 10:09:17.634365  532948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:09:17.634382  532948 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:09:17.634414  532948 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-255897 NodeName:auto-255897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:09:17.634539  532948 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-255897"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:09:17.634928  532948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:09:17.649591  532948 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:09:17.649666  532948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:09:17.660198  532948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0110 10:09:17.674869  532948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:09:17.689836  532948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
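The 2228-byte file scp'd here is the kubeadm/kubelet/kube-proxy configuration printed above, staged as /var/tmp/minikube/kubeadm.yaml.new; it is promoted to /var/tmp/minikube/kubeadm.yaml at 10:09:19.717667 and consumed by the kubeadm init at 10:09:19.866477. As an illustration only (not a step the test performs), the same staged file could be exercised with kubeadm's dry-run mode to see what init would do without changing node state:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run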
	I0110 10:09:17.704972  532948 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:09:17.709148  532948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:09:17.719779  532948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:09:17.933228  532948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:09:17.957455  532948 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897 for IP: 192.168.85.2
	I0110 10:09:17.957476  532948 certs.go:195] generating shared ca certs ...
	I0110 10:09:17.957492  532948 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:17.957657  532948 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:09:17.957712  532948 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:09:17.957725  532948 certs.go:257] generating profile certs ...
	I0110 10:09:17.957785  532948 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.key
	I0110 10:09:17.957802  532948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt with IP's: []
	I0110 10:09:18.422010  532948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt ...
	I0110 10:09:18.422043  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: {Name:mkbdd1e5d354af40e3def7e5120c2d0a5b35219f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.422255  532948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.key ...
	I0110 10:09:18.422268  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.key: {Name:mkb0ad2602398105e2bc139c934dcb89906ddcd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.422367  532948 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key.a91dcfd9
	I0110 10:09:18.422386  532948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt.a91dcfd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 10:09:18.637073  532948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt.a91dcfd9 ...
	I0110 10:09:18.637143  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt.a91dcfd9: {Name:mka67f183f878d82012709b80daf4b0f5ba25843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.637353  532948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key.a91dcfd9 ...
	I0110 10:09:18.637389  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key.a91dcfd9: {Name:mkb0ac16fc9918ca61ed9685f7253669dbaec5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.637512  532948 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt.a91dcfd9 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt
	I0110 10:09:18.637623  532948 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key.a91dcfd9 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key
	I0110 10:09:18.637718  532948 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.key
	I0110 10:09:18.637760  532948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.crt with IP's: []
	I0110 10:09:18.894825  532948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.crt ...
	I0110 10:09:18.894897  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.crt: {Name:mkc5dd1526f2949ddf8542d9e5cf276dda872257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.895110  532948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.key ...
	I0110 10:09:18.895147  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.key: {Name:mk817d6047745a2294b381151d796eda203486e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.895367  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:09:18.895441  532948 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:09:18.895468  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:09:18.895522  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:09:18.895574  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:09:18.895637  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:09:18.895711  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:09:18.896290  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:09:18.937602  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:09:18.965021  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:09:19.008720  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:09:19.051688  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0110 10:09:19.080925  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:09:19.116559  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:09:19.147017  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:09:19.177623  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:09:19.208646  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:09:19.240974  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:09:19.270955  532948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:09:19.290703  532948 ssh_runner.go:195] Run: openssl version
	I0110 10:09:19.296921  532948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:09:19.308663  532948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:09:19.322278  532948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:09:19.326988  532948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:09:19.327096  532948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:09:19.374458  532948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:09:19.385211  532948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
	I0110 10:09:19.393379  532948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:19.403297  532948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:09:19.413717  532948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:19.419066  532948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:19.419172  532948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:19.472674  532948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:09:19.485529  532948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 10:09:19.495975  532948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:09:19.510650  532948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:09:19.519362  532948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:09:19.525233  532948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:09:19.525331  532948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:09:19.582591  532948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:09:19.591389  532948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
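The openssl/ln sequence above implements the standard OpenSSL CA-hash layout: each certificate is linked into /etc/ssl/certs under its own name and again under its subject hash plus a .0 suffix (3ec20f2e.0, b5213941.0 and 51391683.0 in this run), which is the name OpenSSL actually looks up during verification. A minimal sketch of the same steps for one of the certs, with the hash computed into a variable instead of hard-coded:

    sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem)
    sudo ln -fs /etc/ssl/certs/309898.pem "/etc/ssl/certs/${hash}.0"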
	I0110 10:09:19.602584  532948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:09:19.610361  532948 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 10:09:19.610438  532948 kubeadm.go:401] StartCluster: {Name:auto-255897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-255897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:09:19.610529  532948 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:09:19.610663  532948 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:09:19.694784  532948 cri.go:96] found id: ""
	I0110 10:09:19.694890  532948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:09:19.717667  532948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 10:09:19.744463  532948 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:09:19.744605  532948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:09:19.761060  532948 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:09:19.761082  532948 kubeadm.go:158] found existing configuration files:
	
	I0110 10:09:19.761157  532948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 10:09:19.772394  532948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:09:19.772517  532948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:09:19.782911  532948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 10:09:19.800139  532948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:09:19.800240  532948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:09:19.814396  532948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 10:09:19.828775  532948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:09:19.828872  532948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:09:19.843028  532948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 10:09:19.854938  532948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:09:19.855036  532948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 10:09:19.866477  532948 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:09:19.925176  532948 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:09:19.927069  532948 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:09:20.066592  532948 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:09:20.066699  532948 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:09:20.066766  532948 kubeadm.go:319] OS: Linux
	I0110 10:09:20.066836  532948 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:09:20.066905  532948 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:09:20.066976  532948 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:09:20.067045  532948 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:09:20.067114  532948 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:09:20.067183  532948 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:09:20.067249  532948 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:09:20.067319  532948 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:09:20.067384  532948 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:09:20.180951  532948 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:09:20.181143  532948 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:09:20.181283  532948 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:09:20.197499  532948 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 10:09:19.693866  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:19.715210  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:09:19.715239  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 10:09:20.193979  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:20.237307  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:09:20.237338  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 10:09:20.623485  532942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.456355456s)
	I0110 10:09:20.623547  532942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.376695997s)
	I0110 10:09:20.623921  532942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.877723583s)
	I0110 10:09:20.627398  532942 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-474984 addons enable metrics-server
	
	I0110 10:09:20.644359  532942 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 10:09:20.647494  532942 addons.go:530] duration metric: took 7.012913651s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 10:09:20.693805  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:20.703971  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:09:20.705482  532942 api_server.go:141] control plane version: v1.35.0
	I0110 10:09:20.705550  532942 api_server.go:131] duration metric: took 6.51206271s to wait for apiserver health ...
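The healthz wait logged above is a plain HTTPS GET against the apiserver: the 403 responses come back while anonymous access to /healthz is still rejected, the 500s while the rbac/bootstrap-roles post-start hook is still failing, and the wait ends at 10:09:20.703971 once the endpoint returns 200. An equivalent manual probe against this cluster would look something like the following (illustrative only; -k skips verification of the minikubeCA-signed serving certificate):

    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.76.2:8443/healthz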
	I0110 10:09:20.705581  532942 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:09:20.709957  532942 system_pods.go:59] 9 kube-system pods found
	I0110 10:09:20.710033  532942 system_pods.go:61] "coredns-7d764666f9-p8q4j" [a9749369-8007-4ae4-ae1f-59587fbc22a1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 10:09:20.710059  532942 system_pods.go:61] "coredns-7d764666f9-xpfml" [eb84126e-280a-465e-8285-c77ea1e49de4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 10:09:20.710101  532942 system_pods.go:61] "etcd-newest-cni-474984" [738613df-396f-4911-8345-f8011471a0b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:09:20.710123  532942 system_pods.go:61] "kindnet-92rlc" [f8e102eb-cf98-403c-9e68-b249d36ea4eb] Running
	I0110 10:09:20.710147  532942 system_pods.go:61] "kube-apiserver-newest-cni-474984" [c64c2fc1-0d92-4d38-a4ca-63d9439cffdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:09:20.710187  532942 system_pods.go:61] "kube-controller-manager-newest-cni-474984" [55f26b47-a82c-4ade-9fad-9f806091d48a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:09:20.710208  532942 system_pods.go:61] "kube-proxy-fpllw" [bc315022-efa7-4370-896c-36d094209e88] Running
	I0110 10:09:20.710231  532942 system_pods.go:61] "kube-scheduler-newest-cni-474984" [8e056967-7cc7-4079-80dd-f856af7e8343] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:09:20.710290  532942 system_pods.go:61] "storage-provisioner" [19c1c419-c666-41b9-94ed-e8e852e9f2e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 10:09:20.710311  532942 system_pods.go:74] duration metric: took 4.70985ms to wait for pod list to return data ...
	I0110 10:09:20.710335  532942 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:09:20.713207  532942 default_sa.go:45] found service account: "default"
	I0110 10:09:20.713257  532942 default_sa.go:55] duration metric: took 2.884695ms for default service account to be created ...
	I0110 10:09:20.713307  532942 kubeadm.go:587] duration metric: took 7.079046466s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 10:09:20.713352  532942 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:09:20.716313  532942 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:09:20.716369  532942 node_conditions.go:123] node cpu capacity is 2
	I0110 10:09:20.716413  532942 node_conditions.go:105] duration metric: took 3.037459ms to run NodePressure ...
	I0110 10:09:20.716446  532942 start.go:242] waiting for startup goroutines ...
	I0110 10:09:20.716488  532942 start.go:247] waiting for cluster config update ...
	I0110 10:09:20.716563  532942 start.go:256] writing updated cluster config ...
	I0110 10:09:20.716917  532942 ssh_runner.go:195] Run: rm -f paused
	I0110 10:09:20.806280  532942 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:09:20.810643  532942 out.go:203] 
	W0110 10:09:20.813586  532942 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:09:20.816483  532942 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:09:20.819550  532942 out.go:179] * Done! kubectl is now configured to use "newest-cni-474984" cluster and "default" namespace by default
	I0110 10:09:20.200900  532948 out.go:252]   - Generating certificates and keys ...
	I0110 10:09:20.201060  532948 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:09:20.201185  532948 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:09:20.318503  532948 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:09:20.526269  532948 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:09:20.744279  532948 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:09:20.925492  532948 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:09:21.341909  532948 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:09:21.342170  532948 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-255897 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:09:21.969205  532948 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:09:21.969474  532948 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-255897 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:09:22.710852  532948 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:09:23.839438  532948 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:09:24.179601  532948 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:09:24.179671  532948 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:09:24.332867  532948 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	
	
	==> CRI-O <==
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.009624106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.014786585Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-fpllw/POD" id=93ecc5ea-2ec6-4d08-be20-2255fa558b60 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.01490821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.055528001Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=244afb01-9dfc-47da-ade0-942d74d1fc4f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.056891632Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=93ecc5ea-2ec6-4d08-be20-2255fa558b60 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.06824317Z" level=info msg="Ran pod sandbox 89a56e3f98463b08aed6364c4fc378cfd2ddc55e84c9c91a10af4f2b1b250316 with infra container: kube-system/kube-proxy-fpllw/POD" id=93ecc5ea-2ec6-4d08-be20-2255fa558b60 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.069492299Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=d1dca9ec-fa58-44c4-9275-559a768cb5cf name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.072436949Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=19c367e7-8d20-41ec-a358-0abe04eeb52c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.077960726Z" level=info msg="Creating container: kube-system/kube-proxy-fpllw/kube-proxy" id=80b218a7-df00-4bb0-bf4e-8a022efc3d56 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.078272817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.078318512Z" level=info msg="Ran pod sandbox d41e7080a2dc53a6e98414f6bcfa22ff08e50d03a091676d157ee9b20a746aaf with infra container: kube-system/kindnet-92rlc/POD" id=244afb01-9dfc-47da-ade0-942d74d1fc4f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.084676195Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=603871ff-7f0c-42d4-ac04-664459cb8a17 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.085663054Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=fa94138e-5650-4420-8dd9-1b1b714699e2 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.086650462Z" level=info msg="Creating container: kube-system/kindnet-92rlc/kindnet-cni" id=38f2c7c9-9ee9-4b2b-9c3b-a97982c1f81a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.086773179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.094732623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.095291632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.10940469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.110053849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.190499557Z" level=info msg="Created container d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f: kube-system/kindnet-92rlc/kindnet-cni" id=38f2c7c9-9ee9-4b2b-9c3b-a97982c1f81a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.193790342Z" level=info msg="Starting container: d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f" id=d8761dc8-f576-485b-a244-f439ac07e464 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.197554851Z" level=info msg="Created container 1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273: kube-system/kube-proxy-fpllw/kube-proxy" id=80b218a7-df00-4bb0-bf4e-8a022efc3d56 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.201589917Z" level=info msg="Starting container: 1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273" id=0b84baf8-fe11-49c6-8c5f-b625decf0dee name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.207067417Z" level=info msg="Started container" PID=1072 containerID=d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f description=kube-system/kindnet-92rlc/kindnet-cni id=d8761dc8-f576-485b-a244-f439ac07e464 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d41e7080a2dc53a6e98414f6bcfa22ff08e50d03a091676d157ee9b20a746aaf
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.216188597Z" level=info msg="Started container" PID=1073 containerID=1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273 description=kube-system/kube-proxy-fpllw/kube-proxy id=0b84baf8-fe11-49c6-8c5f-b625decf0dee name=/runtime.v1.RuntimeService/StartContainer sandboxID=89a56e3f98463b08aed6364c4fc378cfd2ddc55e84c9c91a10af4f2b1b250316
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1b0eb1a1125bf       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   6 seconds ago       Running             kube-proxy                1                   89a56e3f98463       kube-proxy-fpllw                            kube-system
	d4cb6b35f756f       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   6 seconds ago       Running             kindnet-cni               1                   d41e7080a2dc5       kindnet-92rlc                               kube-system
	c04536f0d830e       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   12 seconds ago      Running             kube-controller-manager   1                   fa59d0aa98998       kube-controller-manager-newest-cni-474984   kube-system
	b7bd726e240ea       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   12 seconds ago      Running             kube-scheduler            1                   5dea18f9a3373       kube-scheduler-newest-cni-474984            kube-system
	97be5a2a78c38       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   12 seconds ago      Running             kube-apiserver            1                   c68bc80e9c200       kube-apiserver-newest-cni-474984            kube-system
	42bb52a58dfd6       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   12 seconds ago      Running             etcd                      1                   edd86bc8e4995       etcd-newest-cni-474984                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-474984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-474984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=newest-cni-474984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_08_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:08:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-474984
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:09:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:09:18 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:09:18 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:09:18 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 10:09:18 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-474984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                ec551f0b-0c63-4d9f-9877-0a8f892afcb7
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-474984                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-92rlc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-474984             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-474984    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-fpllw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-474984             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  32s   node-controller  Node newest-cni-474984 event: Registered Node newest-cni-474984 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-474984 event: Registered Node newest-cni-474984 in Controller
	
	
	==> dmesg <==
	[Jan10 09:39] overlayfs: idmapped layers are currently not supported
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	[Jan10 10:06] overlayfs: idmapped layers are currently not supported
	[ +32.420107] overlayfs: idmapped layers are currently not supported
	[Jan10 10:07] overlayfs: idmapped layers are currently not supported
	[ +31.436967] overlayfs: idmapped layers are currently not supported
	[Jan10 10:08] overlayfs: idmapped layers are currently not supported
	[Jan10 10:09] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [42bb52a58dfd69d45ae514c61bb67b183558e391991a95771906a18d17419a39] <==
	{"level":"info","ts":"2026-01-10T10:09:13.853940Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:09:13.853995Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T10:09:13.863333Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T10:09:13.863735Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T10:09:13.863761Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T10:09:13.863823Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:09:13.863834Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:09:14.389487Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T10:09:14.390020Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:09:14.390101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:09:14.390114Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:09:14.390129Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T10:09:14.396591Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:09:14.396649Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:09:14.396669Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T10:09:14.396688Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:09:14.417284Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-474984 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:09:14.417338Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:09:14.417357Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:09:14.419714Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:09:14.423809Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:09:14.417525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:09:14.423878Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T10:09:14.458485Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T10:09:14.495440Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:09:25 up  2:51,  0 user,  load average: 6.66, 3.20, 2.41
	Linux newest-cni-474984 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f] <==
	I0110 10:09:19.409992       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:09:19.410203       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:09:19.410299       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:09:19.410319       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:09:19.410328       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:09:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:09:19.549409       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:09:19.549427       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:09:19.549436       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:09:19.555111       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [97be5a2a78c38d5d91cc97907b576cf5b92a3ca7d072bd074837d2e6d3d3c18b] <==
	I0110 10:09:18.541304       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 10:09:18.546122       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 10:09:18.546141       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 10:09:18.556139       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 10:09:18.568124       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:18.568152       1 policy_source.go:248] refreshing policies
	I0110 10:09:18.568215       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 10:09:18.568249       1 aggregator.go:187] initial CRD sync complete...
	I0110 10:09:18.568255       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 10:09:18.568261       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 10:09:18.568266       1 cache.go:39] Caches are synced for autoregister controller
	I0110 10:09:18.578914       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 10:09:18.627596       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:09:18.711290       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:09:18.867289       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:09:20.015423       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:09:20.217154       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:09:20.338250       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:09:20.382861       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:09:20.551529       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.100.76"}
	I0110 10:09:20.595363       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.116.79"}
	I0110 10:09:22.486786       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 10:09:22.785462       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:09:22.858200       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:09:22.913076       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [c04536f0d830e2b002362320c09624c56206b491d85ba1ec8826ceb9d4beb039] <==
	I0110 10:09:22.312699       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312705       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312711       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312717       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312751       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312760       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312767       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312774       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.326931       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 10:09:22.327061       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-474984"
	I0110 10:09:22.327164       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 10:09:22.312780       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312795       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312801       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312807       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312821       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312827       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312857       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.342505       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.343738       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.369768       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:09:22.481059       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.515921       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.515954       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:09:22.515963       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273] <==
	I0110 10:09:20.000195       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:09:20.239350       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:09:20.440298       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:20.440339       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 10:09:20.440421       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:09:20.656323       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:09:20.656381       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:09:20.665098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:09:20.665483       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:09:20.665728       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:09:20.667035       1 config.go:200] "Starting service config controller"
	I0110 10:09:20.667101       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:09:20.667194       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:09:20.667230       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:09:20.667270       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:09:20.667304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:09:20.668105       1 config.go:309] "Starting node config controller"
	I0110 10:09:20.673182       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:09:20.673280       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:09:20.767909       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 10:09:20.768008       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:09:20.768021       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b7bd726e240ea1f2186079ed096f5a99813a912fb83d95e0fcfd8b144fb14609] <==
	I0110 10:09:15.220341       1 serving.go:386] Generated self-signed cert in-memory
	W0110 10:09:18.073189       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 10:09:18.073226       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 10:09:18.073235       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 10:09:18.073242       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 10:09:18.345732       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 10:09:18.345762       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:09:18.395202       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 10:09:18.412584       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:09:18.400773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 10:09:18.400798       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 10:09:18.626650       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.716978     732 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-474984" containerName="etcd"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.720960     732 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-474984" containerName="kube-apiserver"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.721066     732 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-474984" containerName="kube-controller-manager"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.748863     732 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.749132     732 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-474984\" already exists" pod="kube-system/kube-controller-manager-newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.749153     732 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.751914     732 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.752009     732 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.752036     732 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.757176     732 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.808079     732 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-474984\" already exists" pod="kube-system/kube-scheduler-newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.808112     732 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.849351     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f8e102eb-cf98-403c-9e68-b249d36ea4eb-cni-cfg\") pod \"kindnet-92rlc\" (UID: \"f8e102eb-cf98-403c-9e68-b249d36ea4eb\") " pod="kube-system/kindnet-92rlc"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.849398     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8e102eb-cf98-403c-9e68-b249d36ea4eb-lib-modules\") pod \"kindnet-92rlc\" (UID: \"f8e102eb-cf98-403c-9e68-b249d36ea4eb\") " pod="kube-system/kindnet-92rlc"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.849422     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc315022-efa7-4370-896c-36d094209e88-lib-modules\") pod \"kube-proxy-fpllw\" (UID: \"bc315022-efa7-4370-896c-36d094209e88\") " pod="kube-system/kube-proxy-fpllw"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.849454     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc315022-efa7-4370-896c-36d094209e88-xtables-lock\") pod \"kube-proxy-fpllw\" (UID: \"bc315022-efa7-4370-896c-36d094209e88\") " pod="kube-system/kube-proxy-fpllw"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.849481     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8e102eb-cf98-403c-9e68-b249d36ea4eb-xtables-lock\") pod \"kindnet-92rlc\" (UID: \"f8e102eb-cf98-403c-9e68-b249d36ea4eb\") " pod="kube-system/kindnet-92rlc"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.868381     732 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-474984\" already exists" pod="kube-system/etcd-newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.922165     732 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 10:09:19 newest-cni-474984 kubelet[732]: W0110 10:09:19.063933     732 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/crio-d41e7080a2dc53a6e98414f6bcfa22ff08e50d03a091676d157ee9b20a746aaf WatchSource:0}: Error finding container d41e7080a2dc53a6e98414f6bcfa22ff08e50d03a091676d157ee9b20a746aaf: Status 404 returned error can't find the container with id d41e7080a2dc53a6e98414f6bcfa22ff08e50d03a091676d157ee9b20a746aaf
	Jan 10 10:09:19 newest-cni-474984 kubelet[732]: W0110 10:09:19.066695     732 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/crio-89a56e3f98463b08aed6364c4fc378cfd2ddc55e84c9c91a10af4f2b1b250316 WatchSource:0}: Error finding container 89a56e3f98463b08aed6364c4fc378cfd2ddc55e84c9c91a10af4f2b1b250316: Status 404 returned error can't find the container with id 89a56e3f98463b08aed6364c4fc378cfd2ddc55e84c9c91a10af4f2b1b250316
	Jan 10 10:09:22 newest-cni-474984 kubelet[732]: E0110 10:09:22.340879     732 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-474984" containerName="kube-controller-manager"
	Jan 10 10:09:22 newest-cni-474984 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 10:09:22 newest-cni-474984 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 10:09:22 newest-cni-474984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-474984 -n newest-cni-474984
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-474984 -n newest-cni-474984: exit status 2 (474.163343ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-474984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner dashboard-metrics-scraper-867fb5f87b-6rrxz kubernetes-dashboard-b84665fb8-d5gvc
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-474984 describe pod coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner dashboard-metrics-scraper-867fb5f87b-6rrxz kubernetes-dashboard-b84665fb8-d5gvc
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-474984 describe pod coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner dashboard-metrics-scraper-867fb5f87b-6rrxz kubernetes-dashboard-b84665fb8-d5gvc: exit status 1 (104.447877ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-p8q4j" not found
	Error from server (NotFound): pods "coredns-7d764666f9-xpfml" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6rrxz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-d5gvc" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-474984 describe pod coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner dashboard-metrics-scraper-867fb5f87b-6rrxz kubernetes-dashboard-b84665fb8-d5gvc: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-474984
helpers_test.go:244: (dbg) docker inspect newest-cni-474984:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913",
	        "Created": "2026-01-10T10:08:27.104727193Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 533158,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T10:09:04.853574773Z",
	            "FinishedAt": "2026-01-10T10:09:03.727518502Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/hosts",
	        "LogPath": "/var/lib/docker/containers/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913-json.log",
	        "Name": "/newest-cni-474984",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-474984:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-474984",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913",
	                "LowerDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0-init/diff:/var/lib/docker/overlay2/99523328b98fa14cfd5448db3de131a4f5857f13df45c310ba7ca179ce321fb8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc2d0ce7f157a7ab7d583e54d2e7e9324ed1327324ae366b3618deedb53ca5b0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-474984",
	                "Source": "/var/lib/docker/volumes/newest-cni-474984/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-474984",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-474984",
	                "name.minikube.sigs.k8s.io": "newest-cni-474984",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4156f4b2d2793139ac39b681a29e3f89bbb47a23d709da6bbe33f37c59e6f0c4",
	            "SandboxKey": "/var/run/docker/netns/4156f4b2d279",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-474984": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:6d:35:9b:36:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6ee25b83b7ac31be71c67e0a8b1d8dc5dbbff09959508135e36bff53cdc9f623",
	                    "EndpointID": "f7db690a1900c9497e244dab5adebf1f3d438c67d7a2da9da2af67caba8774bb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-474984",
	                        "fe5cd02e55d3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474984 -n newest-cni-474984
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474984 -n newest-cni-474984: exit status 2 (458.498249ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-474984 logs -n 25
E0110 10:09:27.775866  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-474984 logs -n 25: (1.531952319s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-820203 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-820203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:07 UTC │
	│ start   │ -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:07 UTC │ 10 Jan 26 10:08 UTC │
	│ image   │ embed-certs-219333 image list --format=json                                                                                                                                                                                                   │ embed-certs-219333                │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p embed-certs-219333 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-219333                │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ delete  │ -p embed-certs-219333                                                                                                                                                                                                                         │ embed-certs-219333                │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ delete  │ -p embed-certs-219333                                                                                                                                                                                                                         │ embed-certs-219333                │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ start   │ -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ image   │ default-k8s-diff-port-820203 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ pause   │ -p default-k8s-diff-port-820203 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-820203                                                                                                                                                                                                               │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ delete  │ -p default-k8s-diff-port-820203                                                                                                                                                                                                               │ default-k8s-diff-port-820203      │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ start   │ -p test-preload-dl-gcs-469953 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-469953        │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-474984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-469953                                                                                                                                                                                                                 │ test-preload-dl-gcs-469953        │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:08 UTC │
	│ start   │ -p test-preload-dl-github-586120 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-586120     │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │                     │
	│ stop    │ -p newest-cni-474984 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:08 UTC │ 10 Jan 26 10:09 UTC │
	│ delete  │ -p test-preload-dl-github-586120                                                                                                                                                                                                              │ test-preload-dl-github-586120     │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │ 10 Jan 26 10:09 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-877054 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-877054 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-877054                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-877054 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │ 10 Jan 26 10:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-474984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │ 10 Jan 26 10:09 UTC │
	│ start   │ -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │ 10 Jan 26 10:09 UTC │
	│ start   │ -p auto-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-255897                       │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │                     │
	│ image   │ newest-cni-474984 image list --format=json                                                                                                                                                                                                    │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │ 10 Jan 26 10:09 UTC │
	│ pause   │ -p newest-cni-474984 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-474984                 │ jenkins │ v1.37.0 │ 10 Jan 26 10:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 10:09:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 10:09:04.547472  532948 out.go:360] Setting OutFile to fd 1 ...
	I0110 10:09:04.547700  532948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:09:04.547740  532948 out.go:374] Setting ErrFile to fd 2...
	I0110 10:09:04.547759  532948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 10:09:04.548190  532948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 10:09:04.548779  532948 out.go:368] Setting JSON to false
	I0110 10:09:04.549733  532948 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10294,"bootTime":1768029451,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 10:09:04.549853  532948 start.go:143] virtualization:  
	I0110 10:09:04.553222  532948 out.go:179] * [auto-255897] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 10:09:04.556133  532948 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 10:09:04.557825  532948 notify.go:221] Checking for updates...
	I0110 10:09:04.561992  532948 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 10:09:04.564876  532948 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:09:04.567752  532948 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 10:09:04.570587  532948 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 10:09:04.573416  532948 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 10:09:04.516844  532942 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:04.517417  532942 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:09:04.554603  532942 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:09:04.554720  532942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:09:04.621070  532942 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2026-01-10 10:09:04.611923391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:09:04.621175  532942 docker.go:319] overlay module found
	I0110 10:09:04.624245  532942 out.go:179] * Using the docker driver based on existing profile
	I0110 10:09:04.577119  532948 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:04.577231  532948 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 10:09:04.641087  532948 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 10:09:04.641199  532948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:09:04.725004  532948 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2026-01-10 10:09:04.714485655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:09:04.725118  532948 docker.go:319] overlay module found
	I0110 10:09:04.628364  532942 start.go:309] selected driver: docker
	I0110 10:09:04.628395  532942 start.go:928] validating driver "docker" against &{Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:09:04.628646  532942 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:09:04.629326  532942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:09:04.727049  532942 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2026-01-10 10:09:04.714485655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:09:04.727379  532942 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 10:09:04.727412  532942 cni.go:84] Creating CNI manager for ""
	I0110 10:09:04.727462  532942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:09:04.727504  532942 start.go:353] cluster config:
	{Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:09:04.728129  532948 out.go:179] * Using the docker driver based on user configuration
	I0110 10:09:04.730762  532942 out.go:179] * Starting "newest-cni-474984" primary control-plane node in "newest-cni-474984" cluster
	I0110 10:09:04.733599  532942 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:09:04.736539  532942 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:09:04.739336  532942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:09:04.739376  532942 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:09:04.739385  532942 cache.go:65] Caching tarball of preloaded images
	I0110 10:09:04.739480  532942 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:09:04.739496  532942 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:09:04.739614  532942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/config.json ...
	I0110 10:09:04.739849  532942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:09:04.760878  532942 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:09:04.760897  532942 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:09:04.760912  532942 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:09:04.760945  532942 start.go:360] acquireMachinesLock for newest-cni-474984: {Name:mk0515f3568da12603bdab21609a1a4ed360d8a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:09:04.761000  532942 start.go:364] duration metric: took 37.875µs to acquireMachinesLock for "newest-cni-474984"
	I0110 10:09:04.761021  532942 start.go:96] Skipping create...Using existing machine configuration
	I0110 10:09:04.761026  532942 fix.go:54] fixHost starting: 
	I0110 10:09:04.761395  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:04.791885  532942 fix.go:112] recreateIfNeeded on newest-cni-474984: state=Stopped err=<nil>
	W0110 10:09:04.791927  532942 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 10:09:04.730798  532948 start.go:309] selected driver: docker
	I0110 10:09:04.730813  532948 start.go:928] validating driver "docker" against <nil>
	I0110 10:09:04.730826  532948 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 10:09:04.731569  532948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 10:09:04.808050  532948 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:43 SystemTime:2026-01-10 10:09:04.797472637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 10:09:04.808217  532948 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 10:09:04.808452  532948 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 10:09:04.811503  532948 out.go:179] * Using Docker driver with root privileges
	I0110 10:09:04.814387  532948 cni.go:84] Creating CNI manager for ""
	I0110 10:09:04.814461  532948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:09:04.814487  532948 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 10:09:04.814575  532948 start.go:353] cluster config:
	{Name:auto-255897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-255897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s Rosetta:false}
	I0110 10:09:04.817590  532948 out.go:179] * Starting "auto-255897" primary control-plane node in "auto-255897" cluster
	I0110 10:09:04.820465  532948 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 10:09:04.823551  532948 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 10:09:04.826288  532948 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:09:04.826340  532948 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 10:09:04.826350  532948 cache.go:65] Caching tarball of preloaded images
	I0110 10:09:04.826438  532948 preload.go:251] Found /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 10:09:04.826448  532948 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 10:09:04.826562  532948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/config.json ...
	I0110 10:09:04.826581  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/config.json: {Name:mk940365c5b418bb0df963905068fbd0c77bad75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:04.826742  532948 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 10:09:04.851683  532948 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 10:09:04.851768  532948 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 10:09:04.851829  532948 cache.go:243] Successfully downloaded all kic artifacts
	I0110 10:09:04.851908  532948 start.go:360] acquireMachinesLock for auto-255897: {Name:mka0fc2e0dc9378e55969c5f235dbf5b050f9220 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 10:09:04.852088  532948 start.go:364] duration metric: took 159.082µs to acquireMachinesLock for "auto-255897"
	I0110 10:09:04.852158  532948 start.go:93] Provisioning new machine with config: &{Name:auto-255897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-255897 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:09:04.852398  532948 start.go:125] createHost starting for "" (driver="docker")
	I0110 10:09:04.796251  532942 out.go:252] * Restarting existing docker container for "newest-cni-474984" ...
	I0110 10:09:04.796342  532942 cli_runner.go:164] Run: docker start newest-cni-474984
	I0110 10:09:05.187185  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:05.220965  532942 kic.go:430] container "newest-cni-474984" state is running.
	I0110 10:09:05.221349  532942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:09:05.254830  532942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/config.json ...
	I0110 10:09:05.255052  532942 machine.go:94] provisionDockerMachine start ...
	I0110 10:09:05.255122  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:05.281755  532942 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:05.282076  532942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0110 10:09:05.282085  532942 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:09:05.282898  532942 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36428->127.0.0.1:33464: read: connection reset by peer
	I0110 10:09:08.436403  532942 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-474984
	
	I0110 10:09:08.436429  532942 ubuntu.go:182] provisioning hostname "newest-cni-474984"
	I0110 10:09:08.436543  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:08.455281  532942 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:08.455588  532942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0110 10:09:08.455604  532942 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-474984 && echo "newest-cni-474984" | sudo tee /etc/hostname
	I0110 10:09:08.618526  532942 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-474984
	
	I0110 10:09:08.618674  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:08.637662  532942 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:08.637996  532942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0110 10:09:08.638017  532942 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-474984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-474984/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-474984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:09:08.877130  532942 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:09:08.877162  532942 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:09:08.877182  532942 ubuntu.go:190] setting up certificates
	I0110 10:09:08.877192  532942 provision.go:84] configureAuth start
	I0110 10:09:08.877255  532942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:09:08.903160  532942 provision.go:143] copyHostCerts
	I0110 10:09:08.903229  532942 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:09:08.903251  532942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:09:08.904553  532942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:09:08.904688  532942 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:09:08.904702  532942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:09:08.904735  532942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:09:08.904800  532942 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:09:08.904810  532942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:09:08.904842  532942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:09:08.904901  532942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.newest-cni-474984 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-474984]
	I0110 10:09:04.859799  532948 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 10:09:04.860168  532948 start.go:159] libmachine.API.Create for "auto-255897" (driver="docker")
	I0110 10:09:04.860198  532948 client.go:173] LocalClient.Create starting
	I0110 10:09:04.860266  532948 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem
	I0110 10:09:04.860296  532948 main.go:144] libmachine: Decoding PEM data...
	I0110 10:09:04.860311  532948 main.go:144] libmachine: Parsing certificate...
	I0110 10:09:04.860363  532948 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem
	I0110 10:09:04.860387  532948 main.go:144] libmachine: Decoding PEM data...
	I0110 10:09:04.860398  532948 main.go:144] libmachine: Parsing certificate...
	I0110 10:09:04.860869  532948 cli_runner.go:164] Run: docker network inspect auto-255897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 10:09:04.876408  532948 cli_runner.go:211] docker network inspect auto-255897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 10:09:04.876554  532948 network_create.go:284] running [docker network inspect auto-255897] to gather additional debugging logs...
	I0110 10:09:04.876603  532948 cli_runner.go:164] Run: docker network inspect auto-255897
	W0110 10:09:04.897478  532948 cli_runner.go:211] docker network inspect auto-255897 returned with exit code 1
	I0110 10:09:04.897506  532948 network_create.go:287] error running [docker network inspect auto-255897]: docker network inspect auto-255897: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-255897 not found
	I0110 10:09:04.897520  532948 network_create.go:289] output of [docker network inspect auto-255897]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-255897 not found
	
	** /stderr **
	I0110 10:09:04.897619  532948 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:09:04.936406  532948 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
	I0110 10:09:04.937056  532948 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-510aadcf5949 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:36:18:15:ae:b2:b8} reservation:<nil>}
	I0110 10:09:04.937299  532948 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-96506857328c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f2:61:be:81:c4:11} reservation:<nil>}
	I0110 10:09:04.937633  532948 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6ee25b83b7ac IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:4f:cc:15:87:5f} reservation:<nil>}
	I0110 10:09:04.938050  532948 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2bf40}
	I0110 10:09:04.938068  532948 network_create.go:124] attempt to create docker network auto-255897 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 10:09:04.938131  532948 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-255897 auto-255897
	I0110 10:09:05.025712  532948 network_create.go:108] docker network auto-255897 192.168.85.0/24 created
	I0110 10:09:05.025749  532948 kic.go:121] calculated static IP "192.168.85.2" for the "auto-255897" container
	I0110 10:09:05.025834  532948 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 10:09:05.040949  532948 cli_runner.go:164] Run: docker volume create auto-255897 --label name.minikube.sigs.k8s.io=auto-255897 --label created_by.minikube.sigs.k8s.io=true
	I0110 10:09:05.059134  532948 oci.go:103] Successfully created a docker volume auto-255897
	I0110 10:09:05.059225  532948 cli_runner.go:164] Run: docker run --rm --name auto-255897-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-255897 --entrypoint /usr/bin/test -v auto-255897:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 10:09:05.689028  532948 oci.go:107] Successfully prepared a docker volume auto-255897
	I0110 10:09:05.689088  532948 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:09:05.689097  532948 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 10:09:05.689176  532948 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-255897:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 10:09:08.703109  532948 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-255897:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.013894847s)
	I0110 10:09:08.703137  532948 kic.go:203] duration metric: took 3.014035492s to extract preloaded images to volume ...
	W0110 10:09:08.703256  532948 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 10:09:08.703356  532948 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 10:09:08.814933  532948 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-255897 --name auto-255897 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-255897 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-255897 --network auto-255897 --ip 192.168.85.2 --volume auto-255897:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 10:09:09.186280  532948 cli_runner.go:164] Run: docker container inspect auto-255897 --format={{.State.Running}}
	I0110 10:09:09.235249  532948 cli_runner.go:164] Run: docker container inspect auto-255897 --format={{.State.Status}}
	I0110 10:09:09.289926  532948 cli_runner.go:164] Run: docker exec auto-255897 stat /var/lib/dpkg/alternatives/iptables
	I0110 10:09:09.359578  532948 oci.go:144] the created container "auto-255897" has a running status.
	I0110 10:09:09.359604  532948 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa...
	I0110 10:09:09.630594  532942 provision.go:177] copyRemoteCerts
	I0110 10:09:09.630678  532942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:09:09.630730  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:09.683060  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:09.858586  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:09:09.880936  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 10:09:09.909681  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:09:09.947621  532942 provision.go:87] duration metric: took 1.070407713s to configureAuth
	I0110 10:09:09.947652  532942 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:09:09.947865  532942 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:09.947973  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:09.968485  532942 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:09.968815  532942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33464 <nil> <nil>}
	I0110 10:09:09.968841  532942 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:09:10.314385  532942 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:09:10.314408  532942 machine.go:97] duration metric: took 5.05934622s to provisionDockerMachine
	I0110 10:09:10.314422  532942 start.go:293] postStartSetup for "newest-cni-474984" (driver="docker")
	I0110 10:09:10.314432  532942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:09:10.314507  532942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:09:10.314556  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:10.332672  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:10.436558  532942 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:09:10.440101  532942 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:09:10.440129  532942 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:09:10.440141  532942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:09:10.440199  532942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:09:10.440288  532942 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:09:10.440400  532942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:09:10.447987  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:09:10.466524  532942 start.go:296] duration metric: took 152.087245ms for postStartSetup
	I0110 10:09:10.466608  532942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:09:10.466654  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:10.483562  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:10.581440  532942 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:09:10.586009  532942 fix.go:56] duration metric: took 5.824976176s for fixHost
	I0110 10:09:10.586036  532942 start.go:83] releasing machines lock for "newest-cni-474984", held for 5.825026416s
	I0110 10:09:10.586107  532942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-474984
	I0110 10:09:10.602607  532942 ssh_runner.go:195] Run: cat /version.json
	I0110 10:09:10.602668  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:10.602932  532942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:09:10.602985  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:10.623503  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:10.636593  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:10.733036  532942 ssh_runner.go:195] Run: systemctl --version
	I0110 10:09:10.877277  532942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:09:10.936292  532942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:09:10.942400  532942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:09:10.942553  532942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:09:10.960847  532942 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 10:09:10.960921  532942 start.go:496] detecting cgroup driver to use...
	I0110 10:09:10.960965  532942 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:09:10.961039  532942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:09:10.988249  532942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:09:11.004109  532942 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:09:11.004265  532942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:09:11.028749  532942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:09:11.050492  532942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:09:11.199753  532942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:09:11.334386  532942 docker.go:234] disabling docker service ...
	I0110 10:09:11.334462  532942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:09:11.350711  532942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:09:11.366743  532942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:09:11.513189  532942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:09:11.641476  532942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:09:11.655163  532942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:09:11.669089  532942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:09:11.669209  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.679428  532942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:09:11.679504  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.688314  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.697019  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.705847  532942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:09:11.713863  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.722950  532942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.731360  532942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:11.740559  532942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:09:11.748405  532942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:09:11.755992  532942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:09:11.873435  532942 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:09:12.052964  532942 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:09:12.053088  532942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:09:12.057081  532942 start.go:574] Will wait 60s for crictl version
	I0110 10:09:12.057176  532942 ssh_runner.go:195] Run: which crictl
	I0110 10:09:12.060859  532942 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:09:12.086258  532942 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:09:12.086343  532942 ssh_runner.go:195] Run: crio --version
	I0110 10:09:12.118872  532942 ssh_runner.go:195] Run: crio --version
	I0110 10:09:12.149631  532942 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 10:09:12.152545  532942 cli_runner.go:164] Run: docker network inspect newest-cni-474984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:09:12.168480  532942 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 10:09:12.173014  532942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:09:12.187857  532942 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 10:09:12.190623  532942 kubeadm.go:884] updating cluster {Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:09:12.190763  532942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:09:12.190832  532942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:09:12.229719  532942 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:09:12.229743  532942 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:09:12.229802  532942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:09:12.255437  532942 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:09:12.255463  532942 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:09:12.255471  532942 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 10:09:12.255621  532942 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-474984 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:09:12.255763  532942 ssh_runner.go:195] Run: crio config
	I0110 10:09:12.306318  532942 cni.go:84] Creating CNI manager for ""
	I0110 10:09:12.306342  532942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:09:12.306362  532942 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 10:09:12.306390  532942 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-474984 NodeName:newest-cni-474984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:09:12.306524  532942 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-474984"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:09:12.306606  532942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:09:12.314647  532942 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:09:12.314731  532942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:09:12.322789  532942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 10:09:12.335992  532942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:09:12.349007  532942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0110 10:09:12.361854  532942 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:09:12.365722  532942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:09:12.377072  532942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:09:12.498696  532942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:09:12.515087  532942 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984 for IP: 192.168.76.2
	I0110 10:09:12.515107  532942 certs.go:195] generating shared ca certs ...
	I0110 10:09:12.515122  532942 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:12.515291  532942 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:09:12.515354  532942 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:09:12.515367  532942 certs.go:257] generating profile certs ...
	I0110 10:09:12.515474  532942 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/client.key
	I0110 10:09:12.515549  532942 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key.168eb993
	I0110 10:09:12.515604  532942 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key
	I0110 10:09:12.515738  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:09:12.515787  532942 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:09:12.516075  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:09:12.516155  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:09:12.516195  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:09:12.516224  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:09:12.516292  532942 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:09:12.517652  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:09:12.542076  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:09:12.562399  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:09:12.582763  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:09:12.607334  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 10:09:12.628931  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 10:09:12.649371  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:09:12.672184  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/newest-cni-474984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 10:09:12.698386  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:09:12.726594  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:09:12.745613  532942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:09:12.764207  532942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:09:12.777368  532942 ssh_runner.go:195] Run: openssl version
	I0110 10:09:12.783596  532942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:09:12.792445  532942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:09:12.800643  532942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:09:12.804595  532942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:09:12.804731  532942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:09:12.846586  532942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:09:12.854069  532942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:12.861658  532942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:09:12.869399  532942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:12.873499  532942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:12.873568  532942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:12.915511  532942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:09:12.923091  532942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:09:12.930401  532942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:09:12.938113  532942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:09:12.941943  532942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:09:12.942028  532942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:09:12.984426  532942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:09:12.992414  532942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:09:12.996464  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 10:09:13.044414  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 10:09:13.085785  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 10:09:13.131501  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 10:09:13.220473  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 10:09:13.305443  532942 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 10:09:13.369895  532942 kubeadm.go:401] StartCluster: {Name:newest-cni-474984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-474984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:09:13.369992  532942 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:09:13.370077  532942 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:09:13.537905  532942 cri.go:96] found id: "c04536f0d830e2b002362320c09624c56206b491d85ba1ec8826ceb9d4beb039"
	I0110 10:09:13.537924  532942 cri.go:96] found id: "b7bd726e240ea1f2186079ed096f5a99813a912fb83d95e0fcfd8b144fb14609"
	I0110 10:09:13.537928  532942 cri.go:96] found id: "97be5a2a78c38d5d91cc97907b576cf5b92a3ca7d072bd074837d2e6d3d3c18b"
	I0110 10:09:13.537932  532942 cri.go:96] found id: "42bb52a58dfd69d45ae514c61bb67b183558e391991a95771906a18d17419a39"
	I0110 10:09:13.537935  532942 cri.go:96] found id: ""
	I0110 10:09:13.537985  532942 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 10:09:13.550915  532942 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T10:09:13Z" level=error msg="open /run/runc: no such file or directory"
	I0110 10:09:13.550989  532942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:09:13.568155  532942 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 10:09:13.568176  532942 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 10:09:13.568228  532942 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 10:09:13.599625  532942 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 10:09:13.600013  532942 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-474984" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:09:13.600097  532942 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-308033/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-474984" cluster setting kubeconfig missing "newest-cni-474984" context setting]
	I0110 10:09:13.600353  532942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:13.601595  532942 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 10:09:13.633294  532942 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 10:09:13.633328  532942 kubeadm.go:602] duration metric: took 65.146195ms to restartPrimaryControlPlane
	I0110 10:09:13.633338  532942 kubeadm.go:403] duration metric: took 263.455591ms to StartCluster
	I0110 10:09:13.633354  532942 settings.go:142] acquiring lock: {Name:mk18ca21f9c14e41d156674a9fda822977b8007d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:13.633417  532942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 10:09:13.633999  532942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/kubeconfig: {Name:mkfe3837c161f335848e276ede3d9886cf922c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:13.634222  532942 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 10:09:13.634520  532942 config.go:182] Loaded profile config "newest-cni-474984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:13.634565  532942 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 10:09:13.634630  532942 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-474984"
	I0110 10:09:13.634647  532942 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-474984"
	W0110 10:09:13.634653  532942 addons.go:248] addon storage-provisioner should already be in state true
	I0110 10:09:13.634679  532942 host.go:66] Checking if "newest-cni-474984" exists ...
	I0110 10:09:13.635302  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:13.635786  532942 addons.go:70] Setting dashboard=true in profile "newest-cni-474984"
	I0110 10:09:13.635801  532942 addons.go:239] Setting addon dashboard=true in "newest-cni-474984"
	W0110 10:09:13.635807  532942 addons.go:248] addon dashboard should already be in state true
	I0110 10:09:13.635827  532942 host.go:66] Checking if "newest-cni-474984" exists ...
	I0110 10:09:13.636226  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:13.647053  532942 addons.go:70] Setting default-storageclass=true in profile "newest-cni-474984"
	I0110 10:09:13.647178  532942 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-474984"
	I0110 10:09:13.648054  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:13.660633  532942 out.go:179] * Verifying Kubernetes components...
	I0110 10:09:13.676734  532942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:09:13.717944  532942 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 10:09:13.722421  532942 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:09:13.722449  532942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 10:09:13.722515  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:13.740793  532942 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 10:09:13.745753  532942 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 10:09:13.748681  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 10:09:13.748706  532942 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 10:09:13.748781  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:13.761358  532942 addons.go:239] Setting addon default-storageclass=true in "newest-cni-474984"
	W0110 10:09:13.761379  532942 addons.go:248] addon default-storageclass should already be in state true
	I0110 10:09:13.761402  532942 host.go:66] Checking if "newest-cni-474984" exists ...
	I0110 10:09:13.768948  532942 cli_runner.go:164] Run: docker container inspect newest-cni-474984 --format={{.State.Status}}
	I0110 10:09:13.812701  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:13.835610  532942 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 10:09:13.835642  532942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 10:09:13.835702  532942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-474984
	I0110 10:09:13.844854  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:13.924690  532942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33464 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/newest-cni-474984/id_rsa Username:docker}
	I0110 10:09:14.052617  532942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:09:14.115248  532942 api_server.go:52] waiting for apiserver process to appear ...
	I0110 10:09:14.115388  532942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 10:09:14.130392  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 10:09:14.130418  532942 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 10:09:14.167095  532942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 10:09:14.193455  532942 api_server.go:72] duration metric: took 559.198481ms to wait for apiserver process to appear ...
	I0110 10:09:14.193481  532942 api_server.go:88] waiting for apiserver healthz status ...
	I0110 10:09:14.193499  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:14.203113  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 10:09:14.203134  532942 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 10:09:14.246831  532942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 10:09:14.298904  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 10:09:14.298930  532942 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 10:09:14.412279  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 10:09:14.412310  532942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 10:09:09.835438  532948 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 10:09:09.856689  532948 cli_runner.go:164] Run: docker container inspect auto-255897 --format={{.State.Status}}
	I0110 10:09:09.876683  532948 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 10:09:09.876702  532948 kic_runner.go:114] Args: [docker exec --privileged auto-255897 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 10:09:09.924217  532948 cli_runner.go:164] Run: docker container inspect auto-255897 --format={{.State.Status}}
	I0110 10:09:09.946429  532948 machine.go:94] provisionDockerMachine start ...
	I0110 10:09:09.946514  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:09.966053  532948 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:09.966383  532948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I0110 10:09:09.966392  532948 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 10:09:09.967002  532948 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52640->127.0.0.1:33469: read: connection reset by peer
	I0110 10:09:13.136619  532948 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-255897
	
	I0110 10:09:13.136690  532948 ubuntu.go:182] provisioning hostname "auto-255897"
	I0110 10:09:13.136769  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:13.167847  532948 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:13.168243  532948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I0110 10:09:13.168257  532948 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-255897 && echo "auto-255897" | sudo tee /etc/hostname
	I0110 10:09:13.375002  532948 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-255897
	
	I0110 10:09:13.375132  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:13.398520  532948 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:13.398834  532948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I0110 10:09:13.398849  532948 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-255897' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-255897/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-255897' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 10:09:13.572343  532948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 10:09:13.572403  532948 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-308033/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-308033/.minikube}
	I0110 10:09:13.572440  532948 ubuntu.go:190] setting up certificates
	I0110 10:09:13.572463  532948 provision.go:84] configureAuth start
	I0110 10:09:13.572566  532948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-255897
	I0110 10:09:13.594268  532948 provision.go:143] copyHostCerts
	I0110 10:09:13.594336  532948 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem, removing ...
	I0110 10:09:13.594345  532948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem
	I0110 10:09:13.594410  532948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/cert.pem (1123 bytes)
	I0110 10:09:13.594497  532948 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem, removing ...
	I0110 10:09:13.594503  532948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem
	I0110 10:09:13.594529  532948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/key.pem (1675 bytes)
	I0110 10:09:13.594589  532948 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem, removing ...
	I0110 10:09:13.594594  532948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem
	I0110 10:09:13.594617  532948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-308033/.minikube/ca.pem (1082 bytes)
	I0110 10:09:13.594670  532948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem org=jenkins.auto-255897 san=[127.0.0.1 192.168.85.2 auto-255897 localhost minikube]
	I0110 10:09:14.414295  532948 provision.go:177] copyRemoteCerts
	I0110 10:09:14.414364  532948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 10:09:14.414405  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:14.435906  532948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa Username:docker}
	I0110 10:09:14.545747  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 10:09:14.573322  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0110 10:09:14.598385  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 10:09:14.623628  532948 provision.go:87] duration metric: took 1.051107198s to configureAuth
	I0110 10:09:14.623700  532948 ubuntu.go:206] setting minikube options for container-runtime
	I0110 10:09:14.623938  532948 config.go:182] Loaded profile config "auto-255897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 10:09:14.624105  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:14.660740  532948 main.go:144] libmachine: Using SSH client type: native
	I0110 10:09:14.661049  532948 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33469 <nil> <nil>}
	I0110 10:09:14.661062  532948 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 10:09:15.081545  532948 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 10:09:15.081574  532948 machine.go:97] duration metric: took 5.135125852s to provisionDockerMachine
	I0110 10:09:15.081584  532948 client.go:176] duration metric: took 10.221379796s to LocalClient.Create
	I0110 10:09:15.081598  532948 start.go:167] duration metric: took 10.221432949s to libmachine.API.Create "auto-255897"
	I0110 10:09:15.081605  532948 start.go:293] postStartSetup for "auto-255897" (driver="docker")
	I0110 10:09:15.081659  532948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 10:09:15.081751  532948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 10:09:15.081810  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:15.110196  532948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa Username:docker}
	I0110 10:09:15.240237  532948 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 10:09:15.243560  532948 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 10:09:15.243598  532948 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 10:09:15.243610  532948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/addons for local assets ...
	I0110 10:09:15.243675  532948 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-308033/.minikube/files for local assets ...
	I0110 10:09:15.243755  532948 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem -> 3098982.pem in /etc/ssl/certs
	I0110 10:09:15.243859  532948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 10:09:15.260976  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:09:15.290492  532948 start.go:296] duration metric: took 208.870899ms for postStartSetup
	I0110 10:09:15.290886  532948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-255897
	I0110 10:09:15.323106  532948 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/config.json ...
	I0110 10:09:15.323390  532948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 10:09:15.323446  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:15.354618  532948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa Username:docker}
	I0110 10:09:15.472952  532948 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 10:09:15.478569  532948 start.go:128] duration metric: took 10.626151086s to createHost
	I0110 10:09:15.478590  532948 start.go:83] releasing machines lock for "auto-255897", held for 10.626490706s
	I0110 10:09:15.478661  532948 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-255897
	I0110 10:09:15.506210  532948 ssh_runner.go:195] Run: cat /version.json
	I0110 10:09:15.506261  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:15.507408  532948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 10:09:15.507470  532948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-255897
	I0110 10:09:15.550326  532948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa Username:docker}
	I0110 10:09:15.560751  532948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33469 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/auto-255897/id_rsa Username:docker}
	I0110 10:09:15.798184  532948 ssh_runner.go:195] Run: systemctl --version
	I0110 10:09:15.808719  532948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 10:09:15.872381  532948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 10:09:15.880551  532948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 10:09:15.880627  532948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 10:09:15.937769  532948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 10:09:15.937796  532948 start.go:496] detecting cgroup driver to use...
	I0110 10:09:15.937830  532948 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 10:09:15.937883  532948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 10:09:15.962624  532948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 10:09:15.981961  532948 docker.go:218] disabling cri-docker service (if available) ...
	I0110 10:09:15.982035  532948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 10:09:16.016610  532948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 10:09:16.041101  532948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 10:09:16.255460  532948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 10:09:16.429325  532948 docker.go:234] disabling docker service ...
	I0110 10:09:16.429414  532948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 10:09:16.459828  532948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 10:09:16.476787  532948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 10:09:16.655281  532948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 10:09:16.808258  532948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 10:09:16.832341  532948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 10:09:16.852094  532948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 10:09:16.852174  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.861806  532948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 10:09:16.861887  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.871817  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.881478  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.891076  532948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 10:09:16.899898  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.909462  532948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.927241  532948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 10:09:16.936902  532948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 10:09:16.945571  532948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 10:09:16.953860  532948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:09:17.110061  532948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 10:09:17.304198  532948 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 10:09:17.304280  532948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 10:09:17.311200  532948 start.go:574] Will wait 60s for crictl version
	I0110 10:09:17.311281  532948 ssh_runner.go:195] Run: which crictl
	I0110 10:09:17.314977  532948 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 10:09:17.347273  532948 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 10:09:17.347370  532948 ssh_runner.go:195] Run: crio --version
	I0110 10:09:17.383485  532948 ssh_runner.go:195] Run: crio --version
	I0110 10:09:17.423556  532948 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
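
The CRI-O preparation recorded above (pause image, cgroup manager, sysctl allow-list, daemon restart) amounts to a short series of idempotent sed edits against /etc/crio/crio.conf.d/02-crio.conf. A minimal Go sketch of that sequence, with a hypothetical run() helper standing in for minikube's SSH runner (this is an illustration, not minikube's actual code), could look like:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a shell command; minikube routes this through an SSH runner,
// but for illustration we simply shell out locally.
func run(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
	}
	return nil
}

// configureCRIO applies the same pause-image, cgroup-manager and sysctl edits
// to /etc/crio/crio.conf.d/02-crio.conf that the log above records, then
// restarts CRI-O so the new config takes effect.
func configureCRIO(pauseImage, cgroupManager string) error {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// point CRI-O at the desired pause image
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		// switch the cgroup manager (cgroupfs in this run)
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		// allow unprivileged low ports inside pods via default_sysctls
		fmt.Sprintf(`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
		// pick up the new configuration
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := configureCRIO("registry.k8s.io/pause:3.10.1", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
}
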
	I0110 10:09:14.562647  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 10:09:14.562673  532942 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 10:09:14.608888  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 10:09:14.608914  532942 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 10:09:14.653927  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 10:09:14.653963  532942 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 10:09:14.686874  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 10:09:14.686903  532942 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 10:09:14.719555  532942 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:09:14.719582  532942 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 10:09:14.746155  532942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 10:09:17.850526  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 10:09:17.850563  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 10:09:17.850576  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:18.068660  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 10:09:18.068697  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 10:09:18.193913  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:18.355250  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 10:09:18.355284  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 10:09:18.693989  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:18.729597  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:09:18.729626  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 10:09:19.194246  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:19.249341  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:09:19.249367  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
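
The 403 responses above come from the anonymous probe hitting /healthz before the RBAC bootstrap roles exist, and the 500s show the rbac/bootstrap-roles post-start hook still pending; the checker simply retries until the endpoint returns 200. A minimal sketch of such a polling loop, assuming the endpoint from the log and skipping TLS verification for brevity (not minikube's actual implementation), could be:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires. 403 (anonymous user) and 500 (post-start hooks not
// finished) are treated as "not ready yet", mirroring the retries in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
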
	I0110 10:09:17.426481  532948 cli_runner.go:164] Run: docker network inspect auto-255897 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 10:09:17.447711  532948 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 10:09:17.451467  532948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:09:17.461083  532948 kubeadm.go:884] updating cluster {Name:auto-255897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-255897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 10:09:17.461200  532948 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 10:09:17.461255  532948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:09:17.536290  532948 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:09:17.536311  532948 crio.go:433] Images already preloaded, skipping extraction
	I0110 10:09:17.536367  532948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 10:09:17.578824  532948 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 10:09:17.578850  532948 cache_images.go:86] Images are preloaded, skipping loading
	I0110 10:09:17.578859  532948 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 10:09:17.578946  532948 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-255897 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-255897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 10:09:17.579029  532948 ssh_runner.go:195] Run: crio config
	I0110 10:09:17.634338  532948 cni.go:84] Creating CNI manager for ""
	I0110 10:09:17.634365  532948 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 10:09:17.634382  532948 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 10:09:17.634414  532948 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-255897 NodeName:auto-255897 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 10:09:17.634539  532948 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-255897"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 10:09:17.634928  532948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 10:09:17.649591  532948 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 10:09:17.649666  532948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 10:09:17.660198  532948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0110 10:09:17.674869  532948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 10:09:17.689836  532948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0110 10:09:17.704972  532948 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 10:09:17.709148  532948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 10:09:17.719779  532948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 10:09:17.933228  532948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 10:09:17.957455  532948 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897 for IP: 192.168.85.2
	I0110 10:09:17.957476  532948 certs.go:195] generating shared ca certs ...
	I0110 10:09:17.957492  532948 certs.go:227] acquiring lock for ca certs: {Name:mkd56d8f7b7bf217e39e41937a4490684309bee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:17.957657  532948 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key
	I0110 10:09:17.957712  532948 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key
	I0110 10:09:17.957725  532948 certs.go:257] generating profile certs ...
	I0110 10:09:17.957785  532948 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.key
	I0110 10:09:17.957802  532948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt with IP's: []
	I0110 10:09:18.422010  532948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt ...
	I0110 10:09:18.422043  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: {Name:mkbdd1e5d354af40e3def7e5120c2d0a5b35219f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.422255  532948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.key ...
	I0110 10:09:18.422268  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.key: {Name:mkb0ad2602398105e2bc139c934dcb89906ddcd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.422367  532948 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key.a91dcfd9
	I0110 10:09:18.422386  532948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt.a91dcfd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 10:09:18.637073  532948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt.a91dcfd9 ...
	I0110 10:09:18.637143  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt.a91dcfd9: {Name:mka67f183f878d82012709b80daf4b0f5ba25843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.637353  532948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key.a91dcfd9 ...
	I0110 10:09:18.637389  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key.a91dcfd9: {Name:mkb0ac16fc9918ca61ed9685f7253669dbaec5ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.637512  532948 certs.go:382] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt.a91dcfd9 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt
	I0110 10:09:18.637623  532948 certs.go:386] copying /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key.a91dcfd9 -> /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key
	I0110 10:09:18.637718  532948 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.key
	I0110 10:09:18.637760  532948 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.crt with IP's: []
	I0110 10:09:18.894825  532948 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.crt ...
	I0110 10:09:18.894897  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.crt: {Name:mkc5dd1526f2949ddf8542d9e5cf276dda872257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.895110  532948 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.key ...
	I0110 10:09:18.895147  532948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.key: {Name:mk817d6047745a2294b381151d796eda203486e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 10:09:18.895367  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem (1338 bytes)
	W0110 10:09:18.895441  532948 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898_empty.pem, impossibly tiny 0 bytes
	I0110 10:09:18.895468  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 10:09:18.895522  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/ca.pem (1082 bytes)
	I0110 10:09:18.895574  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/cert.pem (1123 bytes)
	I0110 10:09:18.895637  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/certs/key.pem (1675 bytes)
	I0110 10:09:18.895711  532948 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem (1708 bytes)
	I0110 10:09:18.896290  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 10:09:18.937602  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 10:09:18.965021  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 10:09:19.008720  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 10:09:19.051688  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0110 10:09:19.080925  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 10:09:19.116559  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 10:09:19.147017  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 10:09:19.177623  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/certs/309898.pem --> /usr/share/ca-certificates/309898.pem (1338 bytes)
	I0110 10:09:19.208646  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/ssl/certs/3098982.pem --> /usr/share/ca-certificates/3098982.pem (1708 bytes)
	I0110 10:09:19.240974  532948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-308033/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 10:09:19.270955  532948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 10:09:19.290703  532948 ssh_runner.go:195] Run: openssl version
	I0110 10:09:19.296921  532948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3098982.pem
	I0110 10:09:19.308663  532948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3098982.pem /etc/ssl/certs/3098982.pem
	I0110 10:09:19.322278  532948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3098982.pem
	I0110 10:09:19.326988  532948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 09:17 /usr/share/ca-certificates/3098982.pem
	I0110 10:09:19.327096  532948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3098982.pem
	I0110 10:09:19.374458  532948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 10:09:19.385211  532948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3098982.pem /etc/ssl/certs/3ec20f2e.0
	I0110 10:09:19.393379  532948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:19.403297  532948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 10:09:19.413717  532948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:19.419066  532948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 09:13 /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:19.419172  532948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 10:09:19.472674  532948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 10:09:19.485529  532948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 10:09:19.495975  532948 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/309898.pem
	I0110 10:09:19.510650  532948 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/309898.pem /etc/ssl/certs/309898.pem
	I0110 10:09:19.519362  532948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309898.pem
	I0110 10:09:19.525233  532948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 09:17 /usr/share/ca-certificates/309898.pem
	I0110 10:09:19.525331  532948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309898.pem
	I0110 10:09:19.582591  532948 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 10:09:19.591389  532948 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/309898.pem /etc/ssl/certs/51391683.0
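
The link targets above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject-name hashes of the corresponding certificates; hash-named symlinks under /etc/ssl/certs are how the system trust store locates each CA during verification. A small Go sketch of that hash-and-link step, using a hypothetical linkCA helper and one of the paths from the log, might be:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the OpenSSL subject hash of certPath and installs a
// hash-named symlink for it under /etc/ssl/certs, as the log above does.
func linkCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
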
	I0110 10:09:19.602584  532948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 10:09:19.610361  532948 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 10:09:19.610438  532948 kubeadm.go:401] StartCluster: {Name:auto-255897 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-255897 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 10:09:19.610529  532948 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 10:09:19.610663  532948 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 10:09:19.694784  532948 cri.go:96] found id: ""
	I0110 10:09:19.694890  532948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 10:09:19.717667  532948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 10:09:19.744463  532948 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 10:09:19.744605  532948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 10:09:19.761060  532948 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 10:09:19.761082  532948 kubeadm.go:158] found existing configuration files:
	
	I0110 10:09:19.761157  532948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 10:09:19.772394  532948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 10:09:19.772517  532948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 10:09:19.782911  532948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 10:09:19.800139  532948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 10:09:19.800240  532948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 10:09:19.814396  532948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 10:09:19.828775  532948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 10:09:19.828872  532948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 10:09:19.843028  532948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 10:09:19.854938  532948 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 10:09:19.855036  532948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 10:09:19.866477  532948 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 10:09:19.925176  532948 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 10:09:19.927069  532948 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 10:09:20.066592  532948 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 10:09:20.066699  532948 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 10:09:20.066766  532948 kubeadm.go:319] OS: Linux
	I0110 10:09:20.066836  532948 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 10:09:20.066905  532948 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 10:09:20.066976  532948 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 10:09:20.067045  532948 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 10:09:20.067114  532948 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 10:09:20.067183  532948 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 10:09:20.067249  532948 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 10:09:20.067319  532948 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 10:09:20.067384  532948 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 10:09:20.180951  532948 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 10:09:20.181143  532948 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 10:09:20.181283  532948 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 10:09:20.197499  532948 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
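
The kubeadm init invocation a few lines up is assembled from the kubeadm binary directory for the target Kubernetes version, the rendered config at /var/tmp/minikube/kubeadm.yaml, and a comma-joined list of preflight checks that cannot pass inside a Docker-driver container. A sketch of how that inner command string might be put together (hypothetical helper, abbreviated ignore list; minikube then wraps the result in sudo /bin/bash -c "..."):

package main

import (
	"fmt"
	"strings"
)

// kubeadmInitCmd builds the inner init command: version-specific PATH, the
// rendered config file, and the preflight checks to skip, joined by commas.
func kubeadmInitCmd(binDir, configPath string, ignored []string) string {
	return fmt.Sprintf(`env PATH="%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
		binDir, configPath, strings.Join(ignored, ","))
}

func main() {
	fmt.Println(kubeadmInitCmd(
		"/var/lib/minikube/binaries/v1.35.0",
		"/var/tmp/minikube/kubeadm.yaml",
		[]string{"Swap", "NumCPU", "Mem", "SystemVerification"},
	))
}
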
	I0110 10:09:19.693866  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:19.715210  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:09:19.715239  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 10:09:20.193979  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:20.237307  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 10:09:20.237338  532942 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 10:09:20.623485  532942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.456355456s)
	I0110 10:09:20.623547  532942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.376695997s)
	I0110 10:09:20.623921  532942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.877723583s)
	I0110 10:09:20.627398  532942 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-474984 addons enable metrics-server
	
	I0110 10:09:20.644359  532942 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 10:09:20.647494  532942 addons.go:530] duration metric: took 7.012913651s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 10:09:20.693805  532942 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 10:09:20.703971  532942 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 10:09:20.705482  532942 api_server.go:141] control plane version: v1.35.0
	I0110 10:09:20.705550  532942 api_server.go:131] duration metric: took 6.51206271s to wait for apiserver health ...
	I0110 10:09:20.705581  532942 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 10:09:20.709957  532942 system_pods.go:59] 9 kube-system pods found
	I0110 10:09:20.710033  532942 system_pods.go:61] "coredns-7d764666f9-p8q4j" [a9749369-8007-4ae4-ae1f-59587fbc22a1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 10:09:20.710059  532942 system_pods.go:61] "coredns-7d764666f9-xpfml" [eb84126e-280a-465e-8285-c77ea1e49de4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 10:09:20.710101  532942 system_pods.go:61] "etcd-newest-cni-474984" [738613df-396f-4911-8345-f8011471a0b2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 10:09:20.710123  532942 system_pods.go:61] "kindnet-92rlc" [f8e102eb-cf98-403c-9e68-b249d36ea4eb] Running
	I0110 10:09:20.710147  532942 system_pods.go:61] "kube-apiserver-newest-cni-474984" [c64c2fc1-0d92-4d38-a4ca-63d9439cffdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 10:09:20.710187  532942 system_pods.go:61] "kube-controller-manager-newest-cni-474984" [55f26b47-a82c-4ade-9fad-9f806091d48a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 10:09:20.710208  532942 system_pods.go:61] "kube-proxy-fpllw" [bc315022-efa7-4370-896c-36d094209e88] Running
	I0110 10:09:20.710231  532942 system_pods.go:61] "kube-scheduler-newest-cni-474984" [8e056967-7cc7-4079-80dd-f856af7e8343] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 10:09:20.710290  532942 system_pods.go:61] "storage-provisioner" [19c1c419-c666-41b9-94ed-e8e852e9f2e4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 10:09:20.710311  532942 system_pods.go:74] duration metric: took 4.70985ms to wait for pod list to return data ...
	I0110 10:09:20.710335  532942 default_sa.go:34] waiting for default service account to be created ...
	I0110 10:09:20.713207  532942 default_sa.go:45] found service account: "default"
	I0110 10:09:20.713257  532942 default_sa.go:55] duration metric: took 2.884695ms for default service account to be created ...
	I0110 10:09:20.713307  532942 kubeadm.go:587] duration metric: took 7.079046466s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 10:09:20.713352  532942 node_conditions.go:102] verifying NodePressure condition ...
	I0110 10:09:20.716313  532942 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 10:09:20.716369  532942 node_conditions.go:123] node cpu capacity is 2
	I0110 10:09:20.716413  532942 node_conditions.go:105] duration metric: took 3.037459ms to run NodePressure ...
	I0110 10:09:20.716446  532942 start.go:242] waiting for startup goroutines ...
	I0110 10:09:20.716488  532942 start.go:247] waiting for cluster config update ...
	I0110 10:09:20.716563  532942 start.go:256] writing updated cluster config ...
	I0110 10:09:20.716917  532942 ssh_runner.go:195] Run: rm -f paused
	I0110 10:09:20.806280  532942 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 10:09:20.810643  532942 out.go:203] 
	W0110 10:09:20.813586  532942 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 10:09:20.816483  532942 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 10:09:20.819550  532942 out.go:179] * Done! kubectl is now configured to use "newest-cni-474984" cluster and "default" namespace by default
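
The "minor skew: 2" note above is simply the gap between the kubectl minor version (1.33) and the cluster minor version (1.35); kubectl officially supports a skew of one minor version, so anything larger triggers the warning. A tiny sketch of that calculation, with the version strings taken from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of
// two "major.minor.patch" version strings, e.g. "1.33.2" vs "1.35.0" -> 2.
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.33.2", "1.35.0")
	fmt.Printf("minor skew: %d\n", skew) // prints "minor skew: 2"
}
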
	I0110 10:09:20.200900  532948 out.go:252]   - Generating certificates and keys ...
	I0110 10:09:20.201060  532948 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 10:09:20.201185  532948 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 10:09:20.318503  532948 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 10:09:20.526269  532948 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 10:09:20.744279  532948 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 10:09:20.925492  532948 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 10:09:21.341909  532948 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 10:09:21.342170  532948 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-255897 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:09:21.969205  532948 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 10:09:21.969474  532948 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-255897 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 10:09:22.710852  532948 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 10:09:23.839438  532948 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 10:09:24.179601  532948 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 10:09:24.179671  532948 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 10:09:24.332867  532948 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 10:09:24.566768  532948 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 10:09:24.641138  532948 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 10:09:24.706585  532948 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 10:09:25.270689  532948 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 10:09:25.271933  532948 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 10:09:25.275142  532948 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.009624106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.014786585Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-fpllw/POD" id=93ecc5ea-2ec6-4d08-be20-2255fa558b60 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.01490821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.055528001Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=244afb01-9dfc-47da-ade0-942d74d1fc4f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.056891632Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=93ecc5ea-2ec6-4d08-be20-2255fa558b60 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.06824317Z" level=info msg="Ran pod sandbox 89a56e3f98463b08aed6364c4fc378cfd2ddc55e84c9c91a10af4f2b1b250316 with infra container: kube-system/kube-proxy-fpllw/POD" id=93ecc5ea-2ec6-4d08-be20-2255fa558b60 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.069492299Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=d1dca9ec-fa58-44c4-9275-559a768cb5cf name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.072436949Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=19c367e7-8d20-41ec-a358-0abe04eeb52c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.077960726Z" level=info msg="Creating container: kube-system/kube-proxy-fpllw/kube-proxy" id=80b218a7-df00-4bb0-bf4e-8a022efc3d56 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.078272817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.078318512Z" level=info msg="Ran pod sandbox d41e7080a2dc53a6e98414f6bcfa22ff08e50d03a091676d157ee9b20a746aaf with infra container: kube-system/kindnet-92rlc/POD" id=244afb01-9dfc-47da-ade0-942d74d1fc4f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.084676195Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=603871ff-7f0c-42d4-ac04-664459cb8a17 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.085663054Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=fa94138e-5650-4420-8dd9-1b1b714699e2 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.086650462Z" level=info msg="Creating container: kube-system/kindnet-92rlc/kindnet-cni" id=38f2c7c9-9ee9-4b2b-9c3b-a97982c1f81a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.086773179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.094732623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.095291632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.10940469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.110053849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.190499557Z" level=info msg="Created container d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f: kube-system/kindnet-92rlc/kindnet-cni" id=38f2c7c9-9ee9-4b2b-9c3b-a97982c1f81a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.193790342Z" level=info msg="Starting container: d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f" id=d8761dc8-f576-485b-a244-f439ac07e464 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.197554851Z" level=info msg="Created container 1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273: kube-system/kube-proxy-fpllw/kube-proxy" id=80b218a7-df00-4bb0-bf4e-8a022efc3d56 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.201589917Z" level=info msg="Starting container: 1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273" id=0b84baf8-fe11-49c6-8c5f-b625decf0dee name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.207067417Z" level=info msg="Started container" PID=1072 containerID=d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f description=kube-system/kindnet-92rlc/kindnet-cni id=d8761dc8-f576-485b-a244-f439ac07e464 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d41e7080a2dc53a6e98414f6bcfa22ff08e50d03a091676d157ee9b20a746aaf
	Jan 10 10:09:19 newest-cni-474984 crio[614]: time="2026-01-10T10:09:19.216188597Z" level=info msg="Started container" PID=1073 containerID=1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273 description=kube-system/kube-proxy-fpllw/kube-proxy id=0b84baf8-fe11-49c6-8c5f-b625decf0dee name=/runtime.v1.RuntimeService/StartContainer sandboxID=89a56e3f98463b08aed6364c4fc378cfd2ddc55e84c9c91a10af4f2b1b250316
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1b0eb1a1125bf       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   9 seconds ago       Running             kube-proxy                1                   89a56e3f98463       kube-proxy-fpllw                            kube-system
	d4cb6b35f756f       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   9 seconds ago       Running             kindnet-cni               1                   d41e7080a2dc5       kindnet-92rlc                               kube-system
	c04536f0d830e       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   14 seconds ago      Running             kube-controller-manager   1                   fa59d0aa98998       kube-controller-manager-newest-cni-474984   kube-system
	b7bd726e240ea       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   14 seconds ago      Running             kube-scheduler            1                   5dea18f9a3373       kube-scheduler-newest-cni-474984            kube-system
	97be5a2a78c38       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   14 seconds ago      Running             kube-apiserver            1                   c68bc80e9c200       kube-apiserver-newest-cni-474984            kube-system
	42bb52a58dfd6       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   14 seconds ago      Running             etcd                      1                   edd86bc8e4995       etcd-newest-cni-474984                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-474984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-474984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=newest-cni-474984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T10_08_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 10:08:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-474984
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 10:09:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 10:09:18 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 10:09:18 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 10:09:18 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 10:09:18 +0000   Sat, 10 Jan 2026 10:08:42 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-474984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                ec551f0b-0c63-4d9f-9877-0a8f892afcb7
	  Boot ID:                    93192e55-0c5a-4c17-9b8e-aaade49ef0ff
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-474984                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-92rlc                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      34s
	  kube-system                 kube-apiserver-newest-cni-474984             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-474984    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-fpllw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-newest-cni-474984             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  35s   node-controller  Node newest-cni-474984 event: Registered Node newest-cni-474984 in Controller
	  Normal  RegisteredNode  6s    node-controller  Node newest-cni-474984 event: Registered Node newest-cni-474984 in Controller
	
	
	==> dmesg <==
	[ +23.140139] overlayfs: idmapped layers are currently not supported
	[  +9.671443] overlayfs: idmapped layers are currently not supported
	[Jan10 09:40] overlayfs: idmapped layers are currently not supported
	[ +16.233052] overlayfs: idmapped layers are currently not supported
	[Jan10 09:41] overlayfs: idmapped layers are currently not supported
	[ +33.829030] overlayfs: idmapped layers are currently not supported
	[Jan10 09:43] overlayfs: idmapped layers are currently not supported
	[Jan10 09:45] overlayfs: idmapped layers are currently not supported
	[ +31.841619] overlayfs: idmapped layers are currently not supported
	[Jan10 09:51] overlayfs: idmapped layers are currently not supported
	[Jan10 09:52] overlayfs: idmapped layers are currently not supported
	[Jan10 09:53] overlayfs: idmapped layers are currently not supported
	[Jan10 09:54] overlayfs: idmapped layers are currently not supported
	[Jan10 10:00] overlayfs: idmapped layers are currently not supported
	[Jan10 10:01] overlayfs: idmapped layers are currently not supported
	[Jan10 10:02] overlayfs: idmapped layers are currently not supported
	[Jan10 10:03] overlayfs: idmapped layers are currently not supported
	[Jan10 10:04] overlayfs: idmapped layers are currently not supported
	[Jan10 10:06] overlayfs: idmapped layers are currently not supported
	[ +32.420107] overlayfs: idmapped layers are currently not supported
	[Jan10 10:07] overlayfs: idmapped layers are currently not supported
	[ +31.436967] overlayfs: idmapped layers are currently not supported
	[Jan10 10:08] overlayfs: idmapped layers are currently not supported
	[Jan10 10:09] overlayfs: idmapped layers are currently not supported
	[ +13.587318] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [42bb52a58dfd69d45ae514c61bb67b183558e391991a95771906a18d17419a39] <==
	{"level":"info","ts":"2026-01-10T10:09:13.853940Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T10:09:13.853995Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T10:09:13.863333Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T10:09:13.863735Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T10:09:13.863761Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T10:09:13.863823Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:09:13.863834Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T10:09:14.389487Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T10:09:14.390020Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T10:09:14.390101Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T10:09:14.390114Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:09:14.390129Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T10:09:14.396591Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:09:14.396649Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T10:09:14.396669Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T10:09:14.396688Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T10:09:14.417284Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-474984 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T10:09:14.417338Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:09:14.417357Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T10:09:14.419714Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:09:14.423809Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T10:09:14.417525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T10:09:14.423878Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T10:09:14.458485Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T10:09:14.495440Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:09:28 up  2:51,  0 user,  load average: 6.66, 3.20, 2.41
	Linux newest-cni-474984 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d4cb6b35f756fbfd63ff3639364c7b826b5c1cfe34e73d999570a1c2f189731f] <==
	I0110 10:09:19.409992       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 10:09:19.410203       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 10:09:19.410299       1 main.go:148] setting mtu 1500 for CNI 
	I0110 10:09:19.410319       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 10:09:19.410328       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T10:09:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 10:09:19.549409       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 10:09:19.549427       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 10:09:19.549436       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 10:09:19.555111       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [97be5a2a78c38d5d91cc97907b576cf5b92a3ca7d072bd074837d2e6d3d3c18b] <==
	I0110 10:09:18.541304       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 10:09:18.546122       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 10:09:18.546141       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 10:09:18.556139       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 10:09:18.568124       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:18.568152       1 policy_source.go:248] refreshing policies
	I0110 10:09:18.568215       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 10:09:18.568249       1 aggregator.go:187] initial CRD sync complete...
	I0110 10:09:18.568255       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 10:09:18.568261       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 10:09:18.568266       1 cache.go:39] Caches are synced for autoregister controller
	I0110 10:09:18.578914       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 10:09:18.627596       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 10:09:18.711290       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 10:09:18.867289       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 10:09:20.015423       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 10:09:20.217154       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 10:09:20.338250       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 10:09:20.382861       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 10:09:20.551529       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.100.76"}
	I0110 10:09:20.595363       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.116.79"}
	I0110 10:09:22.486786       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 10:09:22.785462       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 10:09:22.858200       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 10:09:22.913076       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [c04536f0d830e2b002362320c09624c56206b491d85ba1ec8826ceb9d4beb039] <==
	I0110 10:09:22.312699       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312705       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312711       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312717       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312751       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312760       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312767       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312774       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.326931       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 10:09:22.327061       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-474984"
	I0110 10:09:22.327164       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 10:09:22.312780       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312795       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312801       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312807       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312821       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312827       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.312857       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.342505       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.343738       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.369768       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:09:22.481059       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.515921       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:22.515954       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 10:09:22.515963       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [1b0eb1a1125bf18b6235c7d06caa72dbdacb1e395bc49c8da6ae20a8343da273] <==
	I0110 10:09:20.000195       1 server_linux.go:53] "Using iptables proxy"
	I0110 10:09:20.239350       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:09:20.440298       1 shared_informer.go:377] "Caches are synced"
	I0110 10:09:20.440339       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 10:09:20.440421       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 10:09:20.656323       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 10:09:20.656381       1 server_linux.go:136] "Using iptables Proxier"
	I0110 10:09:20.665098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 10:09:20.665483       1 server.go:529] "Version info" version="v1.35.0"
	I0110 10:09:20.665728       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:09:20.667035       1 config.go:200] "Starting service config controller"
	I0110 10:09:20.667101       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 10:09:20.667194       1 config.go:106] "Starting endpoint slice config controller"
	I0110 10:09:20.667230       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 10:09:20.667270       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 10:09:20.667304       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 10:09:20.668105       1 config.go:309] "Starting node config controller"
	I0110 10:09:20.673182       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 10:09:20.673280       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 10:09:20.767909       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 10:09:20.768008       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 10:09:20.768021       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b7bd726e240ea1f2186079ed096f5a99813a912fb83d95e0fcfd8b144fb14609] <==
	I0110 10:09:15.220341       1 serving.go:386] Generated self-signed cert in-memory
	W0110 10:09:18.073189       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 10:09:18.073226       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 10:09:18.073235       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 10:09:18.073242       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 10:09:18.345732       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 10:09:18.345762       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 10:09:18.395202       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 10:09:18.412584       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 10:09:18.400773       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 10:09:18.400798       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 10:09:18.626650       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.716978     732 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-474984" containerName="etcd"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.720960     732 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-474984" containerName="kube-apiserver"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.721066     732 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-474984" containerName="kube-controller-manager"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.748863     732 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.749132     732 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-474984\" already exists" pod="kube-system/kube-controller-manager-newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.749153     732 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.751914     732 kubelet_node_status.go:123] "Node was previously registered" node="newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.752009     732 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.752036     732 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.757176     732 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.808079     732 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-474984\" already exists" pod="kube-system/kube-scheduler-newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.808112     732 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.849351     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f8e102eb-cf98-403c-9e68-b249d36ea4eb-cni-cfg\") pod \"kindnet-92rlc\" (UID: \"f8e102eb-cf98-403c-9e68-b249d36ea4eb\") " pod="kube-system/kindnet-92rlc"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.849398     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8e102eb-cf98-403c-9e68-b249d36ea4eb-lib-modules\") pod \"kindnet-92rlc\" (UID: \"f8e102eb-cf98-403c-9e68-b249d36ea4eb\") " pod="kube-system/kindnet-92rlc"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.849422     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc315022-efa7-4370-896c-36d094209e88-lib-modules\") pod \"kube-proxy-fpllw\" (UID: \"bc315022-efa7-4370-896c-36d094209e88\") " pod="kube-system/kube-proxy-fpllw"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.849454     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc315022-efa7-4370-896c-36d094209e88-xtables-lock\") pod \"kube-proxy-fpllw\" (UID: \"bc315022-efa7-4370-896c-36d094209e88\") " pod="kube-system/kube-proxy-fpllw"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.849481     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8e102eb-cf98-403c-9e68-b249d36ea4eb-xtables-lock\") pod \"kindnet-92rlc\" (UID: \"f8e102eb-cf98-403c-9e68-b249d36ea4eb\") " pod="kube-system/kindnet-92rlc"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: E0110 10:09:18.868381     732 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-474984\" already exists" pod="kube-system/etcd-newest-cni-474984"
	Jan 10 10:09:18 newest-cni-474984 kubelet[732]: I0110 10:09:18.922165     732 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 10:09:19 newest-cni-474984 kubelet[732]: W0110 10:09:19.063933     732 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/crio-d41e7080a2dc53a6e98414f6bcfa22ff08e50d03a091676d157ee9b20a746aaf WatchSource:0}: Error finding container d41e7080a2dc53a6e98414f6bcfa22ff08e50d03a091676d157ee9b20a746aaf: Status 404 returned error can't find the container with id d41e7080a2dc53a6e98414f6bcfa22ff08e50d03a091676d157ee9b20a746aaf
	Jan 10 10:09:19 newest-cni-474984 kubelet[732]: W0110 10:09:19.066695     732 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/fe5cd02e55d3bd2cc156642972e47b36badbc0a4b2631e940a4c24d7a3c34913/crio-89a56e3f98463b08aed6364c4fc378cfd2ddc55e84c9c91a10af4f2b1b250316 WatchSource:0}: Error finding container 89a56e3f98463b08aed6364c4fc378cfd2ddc55e84c9c91a10af4f2b1b250316: Status 404 returned error can't find the container with id 89a56e3f98463b08aed6364c4fc378cfd2ddc55e84c9c91a10af4f2b1b250316
	Jan 10 10:09:22 newest-cni-474984 kubelet[732]: E0110 10:09:22.340879     732 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-474984" containerName="kube-controller-manager"
	Jan 10 10:09:22 newest-cni-474984 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 10:09:22 newest-cni-474984 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 10:09:22 newest-cni-474984 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-474984 -n newest-cni-474984
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-474984 -n newest-cni-474984: exit status 2 (505.966259ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-474984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner dashboard-metrics-scraper-867fb5f87b-6rrxz kubernetes-dashboard-b84665fb8-d5gvc
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-474984 describe pod coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner dashboard-metrics-scraper-867fb5f87b-6rrxz kubernetes-dashboard-b84665fb8-d5gvc
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-474984 describe pod coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner dashboard-metrics-scraper-867fb5f87b-6rrxz kubernetes-dashboard-b84665fb8-d5gvc: exit status 1 (107.519876ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-p8q4j" not found
	Error from server (NotFound): pods "coredns-7d764666f9-xpfml" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-6rrxz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-d5gvc" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-474984 describe pod coredns-7d764666f9-p8q4j coredns-7d764666f9-xpfml storage-provisioner dashboard-metrics-scraper-867fb5f87b-6rrxz kubernetes-dashboard-b84665fb8-d5gvc: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.84s)
E0110 10:14:22.655953  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:24.398916  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:41.348020  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:50.340755  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:57.075056  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:57.080410  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:57.090747  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:57.111115  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:57.151467  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:57.231800  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:57.392280  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:57.712863  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:58.087363  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:58.353842  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:14:59.634777  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:02.195064  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:07.315683  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:17.556746  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/auto-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:19.880394  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:19.885685  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:19.896002  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:19.916341  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:19.956705  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:20.037124  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:20.197615  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:20.518188  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:21.158904  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:15:22.439401  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (274/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.29
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 3.89
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.43
18 TestDownloadOnly/v1.35.0/DeleteAll 0.25
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 128.46
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.8
48 TestAddons/StoppedEnableDisable 12.44
49 TestCertOptions 32.01
50 TestCertExpiration 224.88
58 TestErrorSpam/setup 27.06
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.1
61 TestErrorSpam/pause 7.1
62 TestErrorSpam/unpause 6.23
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 46.68
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.16
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.98
75 TestFunctional/serial/CacheCmd/cache/add_local 1.28
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 31.07
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.45
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 4.39
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 13.64
91 TestFunctional/parallel/DryRun 0.44
92 TestFunctional/parallel/InternationalLanguage 0.23
93 TestFunctional/parallel/StatusCmd 1.14
97 TestFunctional/parallel/ServiceCmdConnect 8.59
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 21.92
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.41
104 TestFunctional/parallel/FileSync 0.43
105 TestFunctional/parallel/CertSync 2.36
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
113 TestFunctional/parallel/License 0.33
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.47
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.23
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.44
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/MountCmd/any-port 8.29
130 TestFunctional/parallel/ServiceCmd/List 0.54
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
133 TestFunctional/parallel/ServiceCmd/Format 0.41
134 TestFunctional/parallel/ServiceCmd/URL 0.43
135 TestFunctional/parallel/MountCmd/specific-port 2.35
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.6
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 0.76
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.2
144 TestFunctional/parallel/ImageCommands/Setup 0.71
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.77
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 135.25
163 TestMultiControlPlane/serial/DeployApp 6.51
164 TestMultiControlPlane/serial/PingHostFromPods 1.49
165 TestMultiControlPlane/serial/AddWorkerNode 30.42
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
168 TestMultiControlPlane/serial/CopyFile 20.28
169 TestMultiControlPlane/serial/StopSecondaryNode 12.92
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 21.39
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.29
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 202.05
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.32
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.14
176 TestMultiControlPlane/serial/StopCluster 36.04
177 TestMultiControlPlane/serial/RestartCluster 70.88
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
179 TestMultiControlPlane/serial/AddSecondaryNode 78.19
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
185 TestJSONOutput/start/Command 45.15
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.84
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 34.85
211 TestKicCustomNetwork/use_default_bridge_network 30.35
212 TestKicExistingNetwork 31.88
213 TestKicCustomSubnet 30.22
214 TestKicStaticIP 30.48
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 61.19
219 TestMountStart/serial/StartWithMountFirst 8.74
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 6.45
222 TestMountStart/serial/VerifyMountSecond 0.29
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.14
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 71.48
231 TestMultiNode/serial/DeployApp2Nodes 5.13
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 28.69
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.32
237 TestMultiNode/serial/StopNode 2.36
238 TestMultiNode/serial/StartAfterStop 8.54
239 TestMultiNode/serial/RestartKeepsNodes 73.07
240 TestMultiNode/serial/DeleteNode 5.94
241 TestMultiNode/serial/StopMultiNode 23.93
242 TestMultiNode/serial/RestartMultiNode 46.28
243 TestMultiNode/serial/ValidateNameConflict 30.44
250 TestScheduledStopUnix 102.69
253 TestInsufficientStorage 10.02
254 TestRunningBinaryUpgrade 311.94
256 TestKubernetesUpgrade 107.8
257 TestMissingContainerUpgrade 146.35
259 TestPause/serial/Start 51.98
260 TestPause/serial/SecondStartNoReconfiguration 17.14
262 TestStoppedBinaryUpgrade/Setup 0.76
263 TestStoppedBinaryUpgrade/Upgrade 313.11
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.87
272 TestPreload/Start-NoPreload-PullImage 70.68
273 TestPreload/Restart-With-Preload-Check-User-Image 46.24
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
277 TestNoKubernetes/serial/StartWithK8s 28.06
278 TestNoKubernetes/serial/StartWithStopK8s 10.02
279 TestNoKubernetes/serial/Start 7.52
280 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.48
282 TestNoKubernetes/serial/ProfileList 1.09
283 TestNoKubernetes/serial/Stop 1.3
284 TestNoKubernetes/serial/StartNoArgs 6.96
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
293 TestNetworkPlugins/group/false 3.57
298 TestStartStop/group/old-k8s-version/serial/FirstStart 60.64
299 TestStartStop/group/old-k8s-version/serial/DeployApp 8.44
301 TestStartStop/group/old-k8s-version/serial/Stop 12.04
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
303 TestStartStop/group/old-k8s-version/serial/SecondStart 55.54
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
309 TestStartStop/group/no-preload/serial/FirstStart 55.02
310 TestStartStop/group/no-preload/serial/DeployApp 9.31
312 TestStartStop/group/no-preload/serial/Stop 12.02
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
314 TestStartStop/group/no-preload/serial/SecondStart 49.66
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
320 TestStartStop/group/embed-certs/serial/FirstStart 47.25
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.61
323 TestStartStop/group/embed-certs/serial/DeployApp 9.44
325 TestStartStop/group/embed-certs/serial/Stop 12.19
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
327 TestStartStop/group/embed-certs/serial/SecondStart 52.81
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.39
330 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.78
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.99
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
338 TestStartStop/group/newest-cni/serial/FirstStart 33.96
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
341 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
343 TestPreload/PreloadSrc/gcs 5.36
344 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestPreload/PreloadSrc/github 4.86
347 TestStartStop/group/newest-cni/serial/Stop 4.32
348 TestPreload/PreloadSrc/gcs-cached 0.61
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
350 TestStartStop/group/newest-cni/serial/SecondStart 16.97
351 TestNetworkPlugins/group/auto/Start 51.86
352 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
356 TestNetworkPlugins/group/kindnet/Start 47.58
357 TestNetworkPlugins/group/auto/KubeletFlags 0.38
358 TestNetworkPlugins/group/auto/NetCatPod 12.38
359 TestNetworkPlugins/group/auto/DNS 0.17
360 TestNetworkPlugins/group/auto/Localhost 0.13
361 TestNetworkPlugins/group/auto/HairPin 0.14
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
364 TestNetworkPlugins/group/kindnet/NetCatPod 11.43
365 TestNetworkPlugins/group/calico/Start 75.06
366 TestNetworkPlugins/group/kindnet/DNS 0.25
367 TestNetworkPlugins/group/kindnet/Localhost 0.23
368 TestNetworkPlugins/group/kindnet/HairPin 0.23
369 TestNetworkPlugins/group/custom-flannel/Start 55.35
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/calico/KubeletFlags 0.33
372 TestNetworkPlugins/group/calico/NetCatPod 11.3
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
375 TestNetworkPlugins/group/calico/DNS 0.16
376 TestNetworkPlugins/group/calico/Localhost 0.14
377 TestNetworkPlugins/group/calico/HairPin 0.13
378 TestNetworkPlugins/group/custom-flannel/DNS 0.18
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
381 TestNetworkPlugins/group/enable-default-cni/Start 70.78
382 TestNetworkPlugins/group/flannel/Start 55.37
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
385 TestNetworkPlugins/group/flannel/NetCatPod 11.26
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.35
388 TestNetworkPlugins/group/flannel/DNS 0.17
389 TestNetworkPlugins/group/flannel/Localhost 0.13
390 TestNetworkPlugins/group/flannel/HairPin 0.13
391 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
392 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
393 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
394 TestNetworkPlugins/group/bridge/Start 65.26
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
396 TestNetworkPlugins/group/bridge/NetCatPod 10.27
397 TestNetworkPlugins/group/bridge/DNS 0.14
398 TestNetworkPlugins/group/bridge/Localhost 0.14
399 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.28.0/json-events (7.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-314745 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-314745 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.286515778s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.29s)
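For context on what the "(dbg) Run:" / "(dbg) Done:" lines mean: the integration tests drive the freshly built binary as an external process and time the call. Below is a minimal sketch of that pattern in Go, reusing the flags from the log above; the code is illustrative only and is not minikube's actual test harness.

// Illustrative sketch only; not code from the minikube test suite.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Flags copied from the json-events log above; the binary path assumes the
	// test tree's build output layout (out/minikube-linux-arm64).
	args := []string{
		"start", "-o=json", "--download-only", "-p", "download-only-314745",
		"--force", "--alsologtostderr", "--kubernetes-version=v1.28.0",
		"--container-runtime=crio", "--driver=docker",
	}
	start := time.Now()
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("start failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("download-only start finished in %s\n", time.Since(start).Round(10*time.Millisecond))
}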

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0110 09:13:06.099281  309898 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0110 09:13:06.099363  309898 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
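The preload-exists subtest only confirms that the tarball fetched during json-events landed in the local cache. A rough equivalent of that check in Go, assuming the default cache location under $HOME/.minikube rather than the CI-specific MINIKUBE_HOME shown in the log:

// Illustrative sketch only; not the check performed in preload.go.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The CI run keeps its cache under MINIKUBE_HOME
	// (/home/jenkins/minikube-integration/22427-308033/.minikube); locally the
	// default is $HOME/.minikube, which is what this sketch assumes.
	home, err := os.UserHomeDir()
	if err != nil {
		fmt.Println("cannot resolve home dir:", err)
		return
	}
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("no local preload:", err)
		return
	}
	fmt.Println("found local preload:", tarball)
}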

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-314745
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-314745: exit status 85 (96.430432ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-314745 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-314745 │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 09:12:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 09:12:58.853997  309903 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:12:58.854449  309903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:12:58.854463  309903 out.go:374] Setting ErrFile to fd 2...
	I0110 09:12:58.854470  309903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:12:58.854729  309903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	W0110 09:12:58.854871  309903 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22427-308033/.minikube/config/config.json: open /home/jenkins/minikube-integration/22427-308033/.minikube/config/config.json: no such file or directory
	I0110 09:12:58.855282  309903 out.go:368] Setting JSON to true
	I0110 09:12:58.856055  309903 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6928,"bootTime":1768029451,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:12:58.856121  309903 start.go:143] virtualization:  
	I0110 09:12:58.862074  309903 out.go:99] [download-only-314745] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0110 09:12:58.862285  309903 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball: no such file or directory
	I0110 09:12:58.862347  309903 notify.go:221] Checking for updates...
	I0110 09:12:58.865435  309903 out.go:171] MINIKUBE_LOCATION=22427
	I0110 09:12:58.868694  309903 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:12:58.871936  309903 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:12:58.875130  309903 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:12:58.878134  309903 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0110 09:12:58.884020  309903 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 09:12:58.884299  309903 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:12:58.908339  309903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:12:58.908441  309903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:12:58.966518  309903 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-10 09:12:58.957077764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:12:58.966625  309903 docker.go:319] overlay module found
	I0110 09:12:58.969708  309903 out.go:99] Using the docker driver based on user configuration
	I0110 09:12:58.969751  309903 start.go:309] selected driver: docker
	I0110 09:12:58.969759  309903 start.go:928] validating driver "docker" against <nil>
	I0110 09:12:58.969865  309903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:12:59.018104  309903 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-10 09:12:59.009176771 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:12:59.018267  309903 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 09:12:59.018558  309903 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 09:12:59.018705  309903 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 09:12:59.021873  309903 out.go:171] Using Docker driver with root privileges
	I0110 09:12:59.024846  309903 cni.go:84] Creating CNI manager for ""
	I0110 09:12:59.024912  309903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 09:12:59.024925  309903 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 09:12:59.025000  309903 start.go:353] cluster config:
	{Name:download-only-314745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-314745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:12:59.028103  309903 out.go:99] Starting "download-only-314745" primary control-plane node in "download-only-314745" cluster
	I0110 09:12:59.028136  309903 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 09:12:59.031001  309903 out.go:99] Pulling base image v0.0.48-1767944074-22401 ...
	I0110 09:12:59.031039  309903 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 09:12:59.031206  309903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 09:12:59.046806  309903 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 09:12:59.046999  309903 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 09:12:59.047103  309903 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 09:12:59.081919  309903 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 09:12:59.081954  309903 cache.go:65] Caching tarball of preloaded images
	I0110 09:12:59.082687  309903 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 09:12:59.086052  309903 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0110 09:12:59.086072  309903 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 09:12:59.086078  309903 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I0110 09:12:59.180261  309903 preload.go:313] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I0110 09:12:59.180423  309903 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 09:13:02.895352  309903 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0110 09:13:02.895836  309903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/download-only-314745/config.json ...
	I0110 09:13:02.895873  309903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/download-only-314745/config.json: {Name:mkea9b657e840ae9532b1abd3383d5f9e742cdec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 09:13:02.896782  309903 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 09:13:02.897029  309903 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22427-308033/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-314745 host does not exist
	  To start a cluster, run: "minikube start -p download-only-314745"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
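Note that LogsDuration passes even though "minikube logs" exits non-zero: the subtest logs exit status 85 for a profile whose host was never created (see the stdout above) and still passes. One way to assert a specific exit code like that in Go, shown purely as an illustration rather than the assertion in aaa_download_only_test.go:

// Illustrative sketch only; not code from the minikube test suite.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken from the log above.
	cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-314745")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("got the expected exit status 85 (host was never created)")
		return
	}
	fmt.Println("unexpected result:", err)
}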

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-314745
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/json-events (3.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-343990 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-343990 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.891631686s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.89s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0110 09:13:10.441352  309898 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I0110 09:13:10.441388  309898 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/LogsDuration (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-343990
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-343990: exit status 85 (425.481066ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-314745 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-314745 │ jenkins │ v1.37.0 │ 10 Jan 26 09:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ delete  │ -p download-only-314745                                                                                                                                                   │ download-only-314745 │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │ 10 Jan 26 09:13 UTC │
	│ start   │ -o=json --download-only -p download-only-343990 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-343990 │ jenkins │ v1.37.0 │ 10 Jan 26 09:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 09:13:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 09:13:06.590361  310102 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:13:06.590581  310102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:13:06.590609  310102 out.go:374] Setting ErrFile to fd 2...
	I0110 09:13:06.590630  310102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:13:06.591060  310102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:13:06.591647  310102 out.go:368] Setting JSON to true
	I0110 09:13:06.592530  310102 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6936,"bootTime":1768029451,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:13:06.592654  310102 start.go:143] virtualization:  
	I0110 09:13:06.595881  310102 out.go:99] [download-only-343990] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:13:06.596044  310102 notify.go:221] Checking for updates...
	I0110 09:13:06.598974  310102 out.go:171] MINIKUBE_LOCATION=22427
	I0110 09:13:06.602026  310102 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:13:06.604965  310102 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:13:06.607840  310102 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:13:06.610701  310102 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0110 09:13:06.616350  310102 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 09:13:06.616631  310102 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:13:06.652892  310102 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:13:06.653033  310102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:13:06.710419  310102 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2026-01-10 09:13:06.701380663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:13:06.710522  310102 docker.go:319] overlay module found
	I0110 09:13:06.713470  310102 out.go:99] Using the docker driver based on user configuration
	I0110 09:13:06.713507  310102 start.go:309] selected driver: docker
	I0110 09:13:06.713522  310102 start.go:928] validating driver "docker" against <nil>
	I0110 09:13:06.713633  310102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:13:06.776101  310102 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2026-01-10 09:13:06.766864525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:13:06.776257  310102 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 09:13:06.776565  310102 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 09:13:06.776722  310102 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 09:13:06.779838  310102 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-343990 host does not exist
	  To start a cluster, run: "minikube start -p download-only-343990"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.43s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.25s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-343990
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I0110 09:13:11.960655  309898 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-661590 --alsologtostderr --binary-mirror http://127.0.0.1:44881 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-661590" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-661590
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-502860
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-502860: exit status 85 (83.575196ms)

                                                
                                                
-- stdout --
	* Profile "addons-502860" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-502860"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-502860
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-502860: exit status 85 (75.423233ms)

                                                
                                                
-- stdout --
	* Profile "addons-502860" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-502860"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (128.46s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-502860 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-502860 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.454937135s)
--- PASS: TestAddons/Setup (128.46s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-502860 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-502860 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.8s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-502860 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-502860 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b815314f-ebfe-4b8e-b8b5-a700cc12f829] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b815314f-ebfe-4b8e-b8b5-a700cc12f829] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003416036s
addons_test.go:696: (dbg) Run:  kubectl --context addons-502860 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-502860 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-502860 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-502860 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.80s)
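The FakeCredentials subtest waits up to 8m0s for pods labelled integration-test=busybox to become healthy before exec'ing into them. A roughly equivalent wait using kubectl from Go, against the same context and namespace; the real test polls through its own helpers rather than shelling out to kubectl:

// Illustrative sketch only; not the polling helper used by the test.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubectl wait covers the same condition the test polls for: pods labelled
	// integration-test=busybox becoming Ready in the default namespace within
	// the 8m budget quoted in the log.
	cmd := exec.Command("kubectl", "--context", "addons-502860",
		"wait", "--for=condition=Ready", "pod",
		"-l", "integration-test=busybox", "-n", "default", "--timeout=8m")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("pods did not become Ready in time:", err)
	}
}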

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.44s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-502860
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-502860: (12.166102126s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-502860
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-502860
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-502860
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

                                                
                                    
x
+
TestCertOptions (32.01s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-525619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0110 10:00:22.499066  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-525619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (29.08244288s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-525619 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-525619 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-525619 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-525619" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-525619
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-525619: (2.062772571s)
--- PASS: TestCertOptions (32.01s)
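TestCertOptions checks the extra SANs requested with --apiserver-ips and --apiserver-names by running openssl against /var/lib/minikube/certs/apiserver.crt inside the node. The same inspection can be done with Go's crypto/x509; a sketch that assumes the certificate has already been copied to a local file named apiserver.crt:

// Illustrative sketch only; not the verification code in cert_options_test.go.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// "apiserver.crt" is a local copy assumed for this sketch, e.g. fetched
	// off the node from /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // should include localhost, www.google.com
	fmt.Println("IP SANs: ", cert.IPAddresses) // should include 127.0.0.1, 192.168.15.15
}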

                                                
                                    
x
+
TestCertExpiration (224.88s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-599529 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0110 09:54:41.352693  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-599529 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.542158275s)
E0110 09:55:22.499089  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:57:44.397852  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-599529 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-599529 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.916020928s)
helpers_test.go:176: Cleaning up "cert-expiration-599529" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-599529
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-599529: (2.417677351s)
--- PASS: TestCertExpiration (224.88s)

                                                
                                    
x
+
TestErrorSpam/setup (27.06s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-878880 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-878880 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-878880 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-878880 --driver=docker  --container-runtime=crio: (27.057060461s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (27.06s)

                                                
                                    
x
+
TestErrorSpam/start (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
x
+
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
x
+
TestErrorSpam/pause (7.1s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 pause: exit status 80 (2.336289167s)

                                                
                                                
-- stdout --
	* Pausing node nospam-878880 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:17:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 pause: exit status 80 (2.319443796s)

                                                
                                                
-- stdout --
	* Pausing node nospam-878880 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:17:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 pause: exit status 80 (2.447115027s)

                                                
                                                
-- stdout --
	* Pausing node nospam-878880 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:17:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.10s)

                                                
                                    
TestErrorSpam/unpause (6.23s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 unpause: exit status 80 (2.063249444s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-878880 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:17:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 unpause: exit status 80 (2.015333156s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-878880 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:17:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 unpause: exit status 80 (2.147725806s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-878880 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T09:17:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.23s)
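Every pause and unpause attempt above fails identically: the "sudo runc list -f json" call inside the node exits 1 with "open /run/runc: no such file or directory", so minikube cannot enumerate containers to pause or unpause. A minimal triage sketch, assuming the nospam-878880 node is still up and that the runc state directory really is expected at /run/runc on this image (both assumptions, not established by this report):

	# does the runc state directory exist on the node? (hypothetical check, not part of the test)
	out/minikube-linux-arm64 -p nospam-878880 ssh "ls -ld /run/runc || true"
	# re-run the exact listing command the pause path uses (taken verbatim from the error above)
	out/minikube-linux-arm64 -p nospam-878880 ssh "sudo runc list -f json"

If crio on this image is configured with a different runtime root, the listing would need to point there instead; that is a guess, not something this log establishes.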

                                                
                                    
TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 stop: (1.326737434s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-878880 --log_dir /tmp/nospam-878880 stop
--- PASS: TestErrorSpam/stop (1.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22427-308033/.minikube/files/etc/test/nested/copy/309898/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (46.68s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499282 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-499282 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (46.677820574s)
--- PASS: TestFunctional/serial/StartWithProxy (46.68s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.16s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0110 09:18:23.438670  309898 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499282 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-499282 --alsologtostderr -v=8: (29.159400844s)
functional_test.go:678: soft start took 29.159916501s for "functional-499282" cluster.
I0110 09:18:52.598754  309898 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (29.16s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-499282 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-499282 cache add registry.k8s.io/pause:3.1: (1.328880452s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-499282 cache add registry.k8s.io/pause:3.3: (1.350061928s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-499282 cache add registry.k8s.io/pause:latest: (1.304487154s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.98s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-499282 /tmp/TestFunctionalserialCacheCmdcacheadd_local92017578/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 cache add minikube-local-cache-test:functional-499282
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 cache delete minikube-local-cache-test:functional-499282
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-499282
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499282 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.088998ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 kubectl -- --context functional-499282 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-499282 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499282 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-499282 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.065202899s)
functional_test.go:776: restart took 31.065319569s for "functional-499282" cluster.
I0110 09:19:31.998200  309898 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (31.07s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-499282 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-499282 logs: (1.446070653s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 logs --file /tmp/TestFunctionalserialLogsFileCmd3652200880/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-499282 logs --file /tmp/TestFunctionalserialLogsFileCmd3652200880/001/logs.txt: (1.510134909s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.39s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-499282 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-499282
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-499282: exit status 115 (388.241201ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30278 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-499282 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499282 config get cpus: exit status 14 (116.481936ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499282 config get cpus: exit status 14 (74.853339ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-499282 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-499282 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 334055: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.64s)

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-499282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.335099ms)

                                                
                                                
-- stdout --
	* [functional-499282] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:20:10.685546  333649 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:20:10.689926  333649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:20:10.689946  333649 out.go:374] Setting ErrFile to fd 2...
	I0110 09:20:10.689953  333649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:20:10.690373  333649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:20:10.690919  333649 out.go:368] Setting JSON to false
	I0110 09:20:10.691962  333649 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7360,"bootTime":1768029451,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:20:10.692117  333649 start.go:143] virtualization:  
	I0110 09:20:10.695931  333649 out.go:179] * [functional-499282] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:20:10.699111  333649 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:20:10.699193  333649 notify.go:221] Checking for updates...
	I0110 09:20:10.704820  333649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:20:10.707776  333649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:20:10.710728  333649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:20:10.713641  333649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:20:10.716663  333649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:20:10.720204  333649 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:20:10.720825  333649 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:20:10.746801  333649 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:20:10.746931  333649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:20:10.806550  333649 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 09:20:10.796999376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:20:10.806662  333649 docker.go:319] overlay module found
	I0110 09:20:10.809804  333649 out.go:179] * Using the docker driver based on existing profile
	I0110 09:20:10.812652  333649 start.go:309] selected driver: docker
	I0110 09:20:10.812674  333649 start.go:928] validating driver "docker" against &{Name:functional-499282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-499282 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:20:10.812802  333649 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:20:10.816427  333649 out.go:203] 
	W0110 09:20:10.819147  333649 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0110 09:20:10.822038  333649 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499282 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)
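The non-zero exit in this block is the expected outcome: the test deliberately requests 250MB and minikube refuses anything below its 1800MB floor (RSRC_INSUFFICIENT_REQ_MEMORY), while the second dry-run without a memory override passes. For comparison, a sketch of the same dry-run with a memory value that clears the floor; 4096 is simply reused from the StartWithProxy invocation earlier in this report, not a value this test ran with:

	# illustrative only: same flags as the failing dry-run, but with memory above the 1800MB minimum
	out/minikube-linux-arm64 start -p functional-499282 --dry-run --memory 4096 --alsologtostderr --driver=docker --container-runtime=crio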

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-499282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-499282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (230.55635ms)

                                                
                                                
-- stdout --
	* [functional-499282] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:20:10.465847  333600 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:20:10.465981  333600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:20:10.465992  333600 out.go:374] Setting ErrFile to fd 2...
	I0110 09:20:10.465997  333600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:20:10.466381  333600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:20:10.466761  333600 out.go:368] Setting JSON to false
	I0110 09:20:10.467736  333600 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7360,"bootTime":1768029451,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:20:10.467817  333600 start.go:143] virtualization:  
	I0110 09:20:10.471355  333600 out.go:179] * [functional-499282] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0110 09:20:10.475020  333600 notify.go:221] Checking for updates...
	I0110 09:20:10.478011  333600 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:20:10.481150  333600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:20:10.483993  333600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:20:10.486874  333600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:20:10.489734  333600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:20:10.492721  333600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:20:10.496101  333600 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:20:10.496848  333600 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:20:10.534656  333600 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:20:10.535401  333600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:20:10.612723  333600 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 09:20:10.596969553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:20:10.612833  333600 docker.go:319] overlay module found
	I0110 09:20:10.617856  333600 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0110 09:20:10.620812  333600 start.go:309] selected driver: docker
	I0110 09:20:10.620836  333600 start.go:928] validating driver "docker" against &{Name:functional-499282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-499282 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 09:20:10.620952  333600 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:20:10.625428  333600 out.go:203] 
	W0110 09:20:10.628581  333600 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0110 09:20:10.631577  333600 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-499282 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-499282 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-f5pv4" [761f136f-9dca-43fe-8c0f-bcde8d88c8c7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-f5pv4" [761f136f-9dca-43fe-8c0f-bcde8d88c8c7] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004081398s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30355
functional_test.go:1685: http://192.168.49.2:30355: success! body:
Request served by hello-node-connect-5d95464fd4-f5pv4

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30355
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)
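The endpoint check above is performed by the test's own Go HTTP client; the equivalent manual spot-check from a shell would be roughly the sketch below. Note that the NodePort (30355 here) is assigned per run, so this exact URL is only meaningful for the run captured above:

	# hypothetical manual spot-check of the echo-server endpoint reported by the test
	curl -i http://192.168.49.2:30355/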

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (21.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [be2f9115-434f-4171-b241-f83144ac9ee6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002940738s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-499282 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-499282 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-499282 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-499282 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [ec8ce7c7-0e42-4d69-a6f9-dd4f3b2f15a0] Pending
helpers_test.go:353: "sp-pod" [ec8ce7c7-0e42-4d69-a6f9-dd4f3b2f15a0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [ec8ce7c7-0e42-4d69-a6f9-dd4f3b2f15a0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004067195s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-499282 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-499282 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-499282 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [db073cb1-db63-4433-8f2f-e0f197dc6023] Pending
helpers_test.go:353: "sp-pod" [db073cb1-db63-4433-8f2f-e0f197dc6023] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003846601s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-499282 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.92s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh -n functional-499282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 cp functional-499282:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2582648009/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh -n functional-499282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh -n functional-499282 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.41s)

                                                
                                    
TestFunctional/parallel/FileSync (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/309898/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo cat /etc/test/nested/copy/309898/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

                                                
                                    
TestFunctional/parallel/CertSync (2.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/309898.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo cat /etc/ssl/certs/309898.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/309898.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo cat /usr/share/ca-certificates/309898.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3098982.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo cat /etc/ssl/certs/3098982.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/3098982.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo cat /usr/share/ca-certificates/3098982.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.36s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-499282 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499282 ssh "sudo systemctl is-active docker": exit status 1 (337.241923ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499282 ssh "sudo systemctl is-active containerd": exit status 1 (387.808177ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                    
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-499282 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-499282 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-499282 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 331417: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-499282 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-499282 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-499282 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [895d3a90-9c4b-467d-ac8b-cd4821927de2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [895d3a90-9c4b-467d-ac8b-cd4821927de2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003649081s
I0110 09:19:49.817815  309898 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-499282 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.94.200 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-499282 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-499282 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-499282 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-phg9r" [9efa4d5c-b9bb-4563-a9ee-96c699c26aa7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-phg9r" [9efa4d5c-b9bb-4563-a9ee-96c699c26aa7] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004158521s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.23s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "380.05234ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "55.706943ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "381.426124ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "57.761128ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdany-port3077938529/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768036805106023459" to /tmp/TestFunctionalparallelMountCmdany-port3077938529/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768036805106023459" to /tmp/TestFunctionalparallelMountCmdany-port3077938529/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768036805106023459" to /tmp/TestFunctionalparallelMountCmdany-port3077938529/001/test-1768036805106023459
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (350.325812ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0110 09:20:05.456684  309898 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 10 09:20 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 10 09:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 10 09:20 test-1768036805106023459
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh cat /mount-9p/test-1768036805106023459
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-499282 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [3887a3e7-daf8-4d78-b6e0-9c9857699212] Pending
helpers_test.go:353: "busybox-mount" [3887a3e7-daf8-4d78-b6e0-9c9857699212] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [3887a3e7-daf8-4d78-b6e0-9c9857699212] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [3887a3e7-daf8-4d78-b6e0-9c9857699212] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004003355s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-499282 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdany-port3077938529/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 service list -o json
functional_test.go:1509: Took "530.350041ms" to run "out/minikube-linux-arm64 -p functional-499282 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30160
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30160
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdspecific-port1081293711/001:/mount-9p --alsologtostderr -v=1 --port 35687]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (541.746711ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0110 09:20:13.934500  309898 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdspecific-port1081293711/001:/mount-9p --alsologtostderr -v=1 --port 35687] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499282 ssh "sudo umount -f /mount-9p": exit status 1 (372.510766ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-499282 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdspecific-port1081293711/001:/mount-9p --alsologtostderr -v=1 --port 35687] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2585865171/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2585865171/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2585865171/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T" /mount1: exit status 1 (1.06229115s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-499282 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2585865171/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2585865171/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-499282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2585865171/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.60s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image ls --format short --alsologtostderr
E0110 09:20:27.623239  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499282 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-499282
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499282 image ls --format short --alsologtostderr:
I0110 09:20:27.590148  336475 out.go:360] Setting OutFile to fd 1 ...
I0110 09:20:27.590334  336475 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:20:27.590340  336475 out.go:374] Setting ErrFile to fd 2...
I0110 09:20:27.590346  336475 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:20:27.590606  336475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
I0110 09:20:27.591296  336475 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 09:20:27.591460  336475 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 09:20:27.592019  336475 cli_runner.go:164] Run: docker container inspect functional-499282 --format={{.State.Status}}
I0110 09:20:27.640639  336475 ssh_runner.go:195] Run: systemctl --version
I0110 09:20:27.640700  336475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499282
I0110 09:20:27.672005  336475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/functional-499282/id_rsa Username:docker}
I0110 09:20:27.783404  336475 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499282 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ de369f46c2ff5 │ 74.1MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                             │ 3.3                                   │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 88898f1d1a62a │ 72.2MB │
│ registry.k8s.io/pause                             │ latest                                │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 271e49a0ebc56 │ 60.9MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ ddc8422d4d35a │ 49.8MB │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ ba04bb24b9575 │ 29MB   │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-499282                     │ ce2d2cda2d858 │ 4.79MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ ce2d2cda2d858 │ 4.79MB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 611c6647fcbbc │ 62.6MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ c3fcf259c473a │ 85MB   │
│ registry.k8s.io/pause                             │ 3.10.1                                │ d7b100cd9a77b │ 520kB  │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ c96ee3c174987 │ 108MB  │
│ localhost/minikube-local-cache-test               │ functional-499282                     │ ca794a6fd9deb │ 3.33kB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ e08f4d9d2e6ed │ 74.5MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499282 image ls --format table --alsologtostderr:
I0110 09:20:28.362034  336701 out.go:360] Setting OutFile to fd 1 ...
I0110 09:20:28.362144  336701 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:20:28.362155  336701 out.go:374] Setting ErrFile to fd 2...
I0110 09:20:28.362160  336701 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:20:28.362408  336701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
I0110 09:20:28.363022  336701 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 09:20:28.363146  336701 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 09:20:28.363648  336701 cli_runner.go:164] Run: docker container inspect functional-499282 --format={{.State.Status}}
I0110 09:20:28.380965  336701 ssh_runner.go:195] Run: systemctl --version
I0110 09:20:28.381037  336701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499282
I0110 09:20:28.418458  336701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/functional-499282/id_rsa Username:docker}
I0110 09:20:28.531101  336701 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499282 image ls --format json --alsologtostderr:
[{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"74106775"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e","gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["g
cr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"49822549"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["
registry.k8s.io/pause:latest"],"size":"246070"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890","registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"60850387"},{"id":"c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"85015535"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503","registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2
e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"72170321"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests":["docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3","docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"108362109"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["ghcr.io/medyagh/image-mirrors/kicba
se/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4789170"},{"id":"611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371","repoDigests":["public.ecr.aws/nginx/nginx@sha256:be49159753b31dc6d536fca5b044033e1e3e836667959ac238471b2ce50b31b0","public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"62642350"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad
918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf","docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"247562353"},{"id":"ca794a6fd9deb69c44cd97f52825b4c476764a236de4aa56bcd4f88e3930f077","repoDigests":["localhost/minikube-local-cache-test@sha256:9995049c1bf9a1c5e29994eeac56ed06d29f4d540050368a6c9e2a0ace216ba6"],"repoTags":["localhost/minikube-local-cache-test:functional-499282"],"size":"3330"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":[
"registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499282 image ls --format json --alsologtostderr:
I0110 09:20:28.078312  336616 out.go:360] Setting OutFile to fd 1 ...
I0110 09:20:28.078513  336616 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:20:28.078542  336616 out.go:374] Setting ErrFile to fd 2...
I0110 09:20:28.078564  336616 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:20:28.079108  336616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
I0110 09:20:28.080160  336616 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 09:20:28.080338  336616 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 09:20:28.080959  336616 cli_runner.go:164] Run: docker container inspect functional-499282 --format={{.State.Status}}
I0110 09:20:28.104948  336616 ssh_runner.go:195] Run: systemctl --version
I0110 09:20:28.105010  336616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499282
I0110 09:20:28.125892  336616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/functional-499282/id_rsa Username:docker}
I0110 09:20:28.235691  336616 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499282 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ca794a6fd9deb69c44cd97f52825b4c476764a236de4aa56bcd4f88e3930f077
repoDigests:
- localhost/minikube-local-cache-test@sha256:9995049c1bf9a1c5e29994eeac56ed06d29f4d540050368a6c9e2a0ace216ba6
repoTags:
- localhost/minikube-local-cache-test:functional-499282
size: "3330"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "85015535"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:be49159753b31dc6d536fca5b044033e1e3e836667959ac238471b2ce50b31b0
- public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "62642350"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "108362109"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
- registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "60850387"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "72170321"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "49822549"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4789170"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "74106775"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499282 image ls --format yaml --alsologtostderr:
I0110 09:20:27.747017  336533 out.go:360] Setting OutFile to fd 1 ...
I0110 09:20:27.747224  336533 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:20:27.747253  336533 out.go:374] Setting ErrFile to fd 2...
I0110 09:20:27.747278  336533 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:20:27.747604  336533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
I0110 09:20:27.748298  336533 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 09:20:27.748554  336533 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 09:20:27.749182  336533 cli_runner.go:164] Run: docker container inspect functional-499282 --format={{.State.Status}}
I0110 09:20:27.767144  336533 ssh_runner.go:195] Run: systemctl --version
I0110 09:20:27.767198  336533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499282
I0110 09:20:27.790172  336533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/functional-499282/id_rsa Username:docker}
I0110 09:20:27.919716  336533 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-499282 ssh pgrep buildkitd: exit status 1 (359.998214ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image build -t localhost/my-image:functional-499282 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-499282 image build -t localhost/my-image:functional-499282 testdata/build --alsologtostderr: (3.594080025s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-499282 image build -t localhost/my-image:functional-499282 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> aadddb44445
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-499282
--> 7a78082110b
Successfully tagged localhost/my-image:functional-499282
7a78082110b0fd6282bcf0f231f239dc6ec3049a8e9cf53092ec9032ffd11415
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-499282 image build -t localhost/my-image:functional-499282 testdata/build --alsologtostderr:
I0110 09:20:28.259094  336674 out.go:360] Setting OutFile to fd 1 ...
I0110 09:20:28.260013  336674 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:20:28.260084  336674 out.go:374] Setting ErrFile to fd 2...
I0110 09:20:28.260105  336674 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 09:20:28.260418  336674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
I0110 09:20:28.261207  336674 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 09:20:28.262066  336674 config.go:182] Loaded profile config "functional-499282": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 09:20:28.262632  336674 cli_runner.go:164] Run: docker container inspect functional-499282 --format={{.State.Status}}
I0110 09:20:28.282505  336674 ssh_runner.go:195] Run: systemctl --version
I0110 09:20:28.282554  336674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-499282
I0110 09:20:28.304094  336674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/functional-499282/id_rsa Username:docker}
I0110 09:20:28.407752  336674 build_images.go:162] Building image from path: /tmp/build.3345293034.tar
I0110 09:20:28.407840  336674 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0110 09:20:28.417964  336674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3345293034.tar
I0110 09:20:28.422882  336674 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3345293034.tar: stat -c "%s %y" /var/lib/minikube/build/build.3345293034.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3345293034.tar': No such file or directory
I0110 09:20:28.422914  336674 ssh_runner.go:362] scp /tmp/build.3345293034.tar --> /var/lib/minikube/build/build.3345293034.tar (3072 bytes)
I0110 09:20:28.447868  336674 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3345293034
I0110 09:20:28.461807  336674 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3345293034 -xf /var/lib/minikube/build/build.3345293034.tar
I0110 09:20:28.470992  336674 crio.go:315] Building image: /var/lib/minikube/build/build.3345293034
I0110 09:20:28.471069  336674 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-499282 /var/lib/minikube/build/build.3345293034 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0110 09:20:31.766914  336674 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-499282 /var/lib/minikube/build/build.3345293034 --cgroup-manager=cgroupfs: (3.295823315s)
I0110 09:20:31.766989  336674 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3345293034
I0110 09:20:31.775042  336674 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3345293034.tar
I0110 09:20:31.782421  336674 build_images.go:218] Built localhost/my-image:functional-499282 from /tmp/build.3345293034.tar
I0110 09:20:31.782450  336674 build_images.go:134] succeeded building to: functional-499282
I0110 09:20:31.782456  336674 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.20s)
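Note: the STEP lines in the stdout above imply that testdata/build contains a Dockerfile along these lines (a sketch reconstructed from the logged build steps; the actual test data may differ):

	# Hypothetical reconstruction of testdata/build/Dockerfile, based on the STEP 1/3..3/3 output above
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

The test then builds it with the image build invocation shown above and verifies the result with the subsequent image ls call.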

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-499282 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282 --alsologtostderr: (1.514165601s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282 --alsologtostderr
E0110 09:20:22.499080  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:20:22.504417  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:20:22.514678  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:20:22.534988  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:20:22.576075  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image ls
E0110 09:20:22.657195  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:20:22.817566  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282 --alsologtostderr
E0110 09:20:23.138412  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image ls
E0110 09:20:23.778723  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282 --alsologtostderr
2026/01/10 09:20:24 [DEBUG] GET http://127.0.0.1:42257/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
E0110 09:20:25.063060  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-499282 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-499282
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-499282
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-499282
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (135.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0110 09:20:42.984789  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:21:03.465306  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:21:44.426241  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m14.379305784s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (135.25s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 kubectl -- rollout status deployment/busybox: (3.92822723s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-82pbj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-9dwdx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-jhhmr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-82pbj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-9dwdx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-jhhmr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-82pbj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-9dwdx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-jhhmr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.51s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-82pbj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-82pbj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-9dwdx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-9dwdx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-jhhmr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 kubectl -- exec busybox-769dd8b7dd-jhhmr -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.49s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (30.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 node add --alsologtostderr -v 5
E0110 09:23:06.346892  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 node add --alsologtostderr -v 5: (29.349952486s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5: (1.065138961s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.42s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-014642 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.071484524s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 status --output json --alsologtostderr -v 5: (1.127109079s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp testdata/cp-test.txt ha-014642:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2266869770/001/cp-test_ha-014642.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642:/home/docker/cp-test.txt ha-014642-m02:/home/docker/cp-test_ha-014642_ha-014642-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m02 "sudo cat /home/docker/cp-test_ha-014642_ha-014642-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642:/home/docker/cp-test.txt ha-014642-m03:/home/docker/cp-test_ha-014642_ha-014642-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m03 "sudo cat /home/docker/cp-test_ha-014642_ha-014642-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642:/home/docker/cp-test.txt ha-014642-m04:/home/docker/cp-test_ha-014642_ha-014642-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m04 "sudo cat /home/docker/cp-test_ha-014642_ha-014642-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp testdata/cp-test.txt ha-014642-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2266869770/001/cp-test_ha-014642-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m02:/home/docker/cp-test.txt ha-014642:/home/docker/cp-test_ha-014642-m02_ha-014642.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642 "sudo cat /home/docker/cp-test_ha-014642-m02_ha-014642.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m02:/home/docker/cp-test.txt ha-014642-m03:/home/docker/cp-test_ha-014642-m02_ha-014642-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m03 "sudo cat /home/docker/cp-test_ha-014642-m02_ha-014642-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m02:/home/docker/cp-test.txt ha-014642-m04:/home/docker/cp-test_ha-014642-m02_ha-014642-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m04 "sudo cat /home/docker/cp-test_ha-014642-m02_ha-014642-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp testdata/cp-test.txt ha-014642-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2266869770/001/cp-test_ha-014642-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m03:/home/docker/cp-test.txt ha-014642:/home/docker/cp-test_ha-014642-m03_ha-014642.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642 "sudo cat /home/docker/cp-test_ha-014642-m03_ha-014642.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m03:/home/docker/cp-test.txt ha-014642-m02:/home/docker/cp-test_ha-014642-m03_ha-014642-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m02 "sudo cat /home/docker/cp-test_ha-014642-m03_ha-014642-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m03:/home/docker/cp-test.txt ha-014642-m04:/home/docker/cp-test_ha-014642-m03_ha-014642-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m04 "sudo cat /home/docker/cp-test_ha-014642-m03_ha-014642-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp testdata/cp-test.txt ha-014642-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2266869770/001/cp-test_ha-014642-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m04:/home/docker/cp-test.txt ha-014642:/home/docker/cp-test_ha-014642-m04_ha-014642.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642 "sudo cat /home/docker/cp-test_ha-014642-m04_ha-014642.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m04:/home/docker/cp-test.txt ha-014642-m02:/home/docker/cp-test_ha-014642-m04_ha-014642-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m02 "sudo cat /home/docker/cp-test_ha-014642-m04_ha-014642-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 cp ha-014642-m04:/home/docker/cp-test.txt ha-014642-m03:/home/docker/cp-test_ha-014642-m04_ha-014642-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 ssh -n ha-014642-m03 "sudo cat /home/docker/cp-test_ha-014642-m04_ha-014642-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.28s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 node stop m02 --alsologtostderr -v 5: (12.123998367s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5: exit status 7 (790.913609ms)

                                                
                                                
-- stdout --
	ha-014642
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-014642-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-014642-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-014642-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:24:02.081557  351650 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:24:02.081665  351650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:24:02.081676  351650 out.go:374] Setting ErrFile to fd 2...
	I0110 09:24:02.081681  351650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:24:02.081951  351650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:24:02.082164  351650 out.go:368] Setting JSON to false
	I0110 09:24:02.082193  351650 mustload.go:66] Loading cluster: ha-014642
	I0110 09:24:02.082636  351650 config.go:182] Loaded profile config "ha-014642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:24:02.082657  351650 status.go:174] checking status of ha-014642 ...
	I0110 09:24:02.083175  351650 cli_runner.go:164] Run: docker container inspect ha-014642 --format={{.State.Status}}
	I0110 09:24:02.083726  351650 notify.go:221] Checking for updates...
	I0110 09:24:02.105954  351650 status.go:371] ha-014642 host status = "Running" (err=<nil>)
	I0110 09:24:02.105976  351650 host.go:66] Checking if "ha-014642" exists ...
	I0110 09:24:02.106267  351650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-014642
	I0110 09:24:02.133862  351650 host.go:66] Checking if "ha-014642" exists ...
	I0110 09:24:02.134178  351650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:24:02.134237  351650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-014642
	I0110 09:24:02.155368  351650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/ha-014642/id_rsa Username:docker}
	I0110 09:24:02.262311  351650 ssh_runner.go:195] Run: systemctl --version
	I0110 09:24:02.269237  351650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:24:02.289681  351650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:24:02.357399  351650 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2026-01-10 09:24:02.347925191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:24:02.357923  351650 kubeconfig.go:125] found "ha-014642" server: "https://192.168.49.254:8443"
	I0110 09:24:02.357967  351650 api_server.go:166] Checking apiserver status ...
	I0110 09:24:02.358017  351650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 09:24:02.370949  351650 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1266/cgroup
	I0110 09:24:02.380024  351650 api_server.go:192] apiserver freezer: "11:freezer:/docker/e30655ad0f7a27f643b4c1eb94d304491b466071f048474698886b9e1533ed5b/crio/crio-307c576b53198bd0665a6684eefafa3aa1ef3849b66fd42e6a3f16ead48afe7d"
	I0110 09:24:02.380096  351650 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e30655ad0f7a27f643b4c1eb94d304491b466071f048474698886b9e1533ed5b/crio/crio-307c576b53198bd0665a6684eefafa3aa1ef3849b66fd42e6a3f16ead48afe7d/freezer.state
	I0110 09:24:02.388162  351650 api_server.go:214] freezer state: "THAWED"
	I0110 09:24:02.388190  351650 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 09:24:02.396429  351650 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 09:24:02.396457  351650 status.go:463] ha-014642 apiserver status = Running (err=<nil>)
	I0110 09:24:02.396485  351650 status.go:176] ha-014642 status: &{Name:ha-014642 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 09:24:02.396572  351650 status.go:174] checking status of ha-014642-m02 ...
	I0110 09:24:02.396906  351650 cli_runner.go:164] Run: docker container inspect ha-014642-m02 --format={{.State.Status}}
	I0110 09:24:02.414175  351650 status.go:371] ha-014642-m02 host status = "Stopped" (err=<nil>)
	I0110 09:24:02.414197  351650 status.go:384] host is not running, skipping remaining checks
	I0110 09:24:02.414204  351650 status.go:176] ha-014642-m02 status: &{Name:ha-014642-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 09:24:02.414224  351650 status.go:174] checking status of ha-014642-m03 ...
	I0110 09:24:02.415009  351650 cli_runner.go:164] Run: docker container inspect ha-014642-m03 --format={{.State.Status}}
	I0110 09:24:02.434913  351650 status.go:371] ha-014642-m03 host status = "Running" (err=<nil>)
	I0110 09:24:02.434947  351650 host.go:66] Checking if "ha-014642-m03" exists ...
	I0110 09:24:02.435855  351650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-014642-m03
	I0110 09:24:02.453589  351650 host.go:66] Checking if "ha-014642-m03" exists ...
	I0110 09:24:02.453916  351650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:24:02.453981  351650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-014642-m03
	I0110 09:24:02.472375  351650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/ha-014642-m03/id_rsa Username:docker}
	I0110 09:24:02.578241  351650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:24:02.591930  351650 kubeconfig.go:125] found "ha-014642" server: "https://192.168.49.254:8443"
	I0110 09:24:02.591962  351650 api_server.go:166] Checking apiserver status ...
	I0110 09:24:02.592006  351650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 09:24:02.603393  351650 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1222/cgroup
	I0110 09:24:02.611522  351650 api_server.go:192] apiserver freezer: "11:freezer:/docker/c8c2b642ac86fea131fb186526fe97b99904ab2f8a727cd326e28029293a3092/crio/crio-d30435e0eb8b7348b5bd197959136fd795240cccb5b352430935ed95ad99ce07"
	I0110 09:24:02.611594  351650 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c8c2b642ac86fea131fb186526fe97b99904ab2f8a727cd326e28029293a3092/crio/crio-d30435e0eb8b7348b5bd197959136fd795240cccb5b352430935ed95ad99ce07/freezer.state
	I0110 09:24:02.619922  351650 api_server.go:214] freezer state: "THAWED"
	I0110 09:24:02.620002  351650 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 09:24:02.628261  351650 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 09:24:02.628292  351650 status.go:463] ha-014642-m03 apiserver status = Running (err=<nil>)
	I0110 09:24:02.628302  351650 status.go:176] ha-014642-m03 status: &{Name:ha-014642-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 09:24:02.628319  351650 status.go:174] checking status of ha-014642-m04 ...
	I0110 09:24:02.628741  351650 cli_runner.go:164] Run: docker container inspect ha-014642-m04 --format={{.State.Status}}
	I0110 09:24:02.645431  351650 status.go:371] ha-014642-m04 host status = "Running" (err=<nil>)
	I0110 09:24:02.645467  351650 host.go:66] Checking if "ha-014642-m04" exists ...
	I0110 09:24:02.645775  351650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-014642-m04
	I0110 09:24:02.664592  351650 host.go:66] Checking if "ha-014642-m04" exists ...
	I0110 09:24:02.664916  351650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:24:02.664962  351650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-014642-m04
	I0110 09:24:02.681962  351650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/ha-014642-m04/id_rsa Username:docker}
	I0110 09:24:02.786264  351650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:24:02.799621  351650 status.go:176] ha-014642-m04 status: &{Name:ha-014642-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (21.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 node start m02 --alsologtostderr -v 5: (19.829907902s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5: (1.395914128s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.39s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.292196871s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.29s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 stop --alsologtostderr -v 5
E0110 09:24:41.347008  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:41.352280  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:41.362677  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:41.383334  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:41.423603  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:41.504543  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:41.664911  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:41.985549  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:42.626302  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:43.906550  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:46.468023  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:24:51.589032  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:25:01.829844  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 stop --alsologtostderr -v 5: (37.57430705s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 start --wait true --alsologtostderr -v 5
E0110 09:25:22.310167  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:25:22.499543  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:25:50.192640  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:26:03.270347  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:27:25.190720  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 start --wait true --alsologtostderr -v 5: (2m44.319739554s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (202.05s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 node delete m03 --alsologtostderr -v 5: (10.323286002s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.32s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.134709135s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.14s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 stop --alsologtostderr -v 5: (35.924788877s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5: exit status 7 (116.915582ms)

                                                
                                                
-- stdout --
	ha-014642
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-014642-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-014642-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:28:36.781423  363919 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:28:36.781620  363919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:28:36.781650  363919 out.go:374] Setting ErrFile to fd 2...
	I0110 09:28:36.781676  363919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:28:36.781936  363919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:28:36.782163  363919 out.go:368] Setting JSON to false
	I0110 09:28:36.782229  363919 mustload.go:66] Loading cluster: ha-014642
	I0110 09:28:36.782303  363919 notify.go:221] Checking for updates...
	I0110 09:28:36.783256  363919 config.go:182] Loaded profile config "ha-014642": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:28:36.783308  363919 status.go:174] checking status of ha-014642 ...
	I0110 09:28:36.783827  363919 cli_runner.go:164] Run: docker container inspect ha-014642 --format={{.State.Status}}
	I0110 09:28:36.801944  363919 status.go:371] ha-014642 host status = "Stopped" (err=<nil>)
	I0110 09:28:36.801963  363919 status.go:384] host is not running, skipping remaining checks
	I0110 09:28:36.801970  363919 status.go:176] ha-014642 status: &{Name:ha-014642 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 09:28:36.801996  363919 status.go:174] checking status of ha-014642-m02 ...
	I0110 09:28:36.802296  363919 cli_runner.go:164] Run: docker container inspect ha-014642-m02 --format={{.State.Status}}
	I0110 09:28:36.826134  363919 status.go:371] ha-014642-m02 host status = "Stopped" (err=<nil>)
	I0110 09:28:36.826153  363919 status.go:384] host is not running, skipping remaining checks
	I0110 09:28:36.826160  363919 status.go:176] ha-014642-m02 status: &{Name:ha-014642-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 09:28:36.826178  363919 status.go:174] checking status of ha-014642-m04 ...
	I0110 09:28:36.826503  363919 cli_runner.go:164] Run: docker container inspect ha-014642-m04 --format={{.State.Status}}
	I0110 09:28:36.847591  363919 status.go:371] ha-014642-m04 host status = "Stopped" (err=<nil>)
	I0110 09:28:36.847611  363919 status.go:384] host is not running, skipping remaining checks
	I0110 09:28:36.847618  363919 status.go:176] ha-014642-m04 status: &{Name:ha-014642-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.04s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (70.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0110 09:29:41.347557  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m9.861082264s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (70.88s)
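The readiness assertion at ha_test.go:594 packs a kubectl go-template onto one line: it walks every node and prints the status of its Ready condition, one value per line, so a healthy restarted cluster prints "True" once per node. The same query, reformatted only for readability:

    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'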

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.19s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 node add --control-plane --alsologtostderr -v 5
E0110 09:30:09.037357  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:30:22.499353  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 node add --control-plane --alsologtostderr -v 5: (1m17.058771208s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5: (1.135388225s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.19s)
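The two steps above are the whole flow for growing an HA cluster: `node add --control-plane` joins another control-plane node to the profile, and `status` re-checks every node afterwards. Replayed against the same profile (a sketch; the final kubectl check is an extra confirmation the test does not run here):

    $ out/minikube-linux-arm64 -p ha-014642 node add --control-plane --alsologtostderr -v 5
    $ out/minikube-linux-arm64 -p ha-014642 status --alsologtostderr -v 5
    $ kubectl get nodes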

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.096373327s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

                                                
                                    
TestJSONOutput/start/Command (45.15s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-320147 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-320147 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (45.143742388s)
--- PASS: TestJSONOutput/start/Command (45.15s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-320147 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-320147 --output=json --user=testUser: (5.840596806s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-558144 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-558144 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.066658ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"08883834-714b-4c40-8c11-0d9fefac8812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-558144] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"48b21899-5da7-4c2a-8615-73211b620141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22427"}}
	{"specversion":"1.0","id":"01abdf8b-1c8f-4e58-a6ac-376f559d154f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2ccf6d85-f2b8-48fd-8e94-78a146da2069","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig"}}
	{"specversion":"1.0","id":"c2588d59-f589-4471-8ee4-fd2543867c16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube"}}
	{"specversion":"1.0","id":"430ba64f-9274-449f-9506-0945a41e1e1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3357e56a-4afb-4fc6-8ab3-c704cf652840","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c900be8f-6492-46ee-9aa3-016673374531","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-558144" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-558144
--- PASS: TestErrorJSONOutput (0.24s)
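Each stdout line above is one CloudEvents-style JSON object (specversion, id, source, type, and a data payload), so the stream is easy to post-process. The test only checks the exit code, but if you wanted to pull out just the error event from a run like this, a jq filter over the same stream works; jq is not used anywhere in this suite and is shown purely as an assumed illustration:

    $ out/minikube-linux-arm64 start -p json-output-error-558144 --memory=3072 --output=json --wait=true --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64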

                                                
                                    
TestKicCustomNetwork/create_custom_network (34.85s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-516345 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-516345 --network=: (32.619994717s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-516345" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-516345
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-516345: (2.195817735s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.85s)
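A bare --network= lets minikube pick the Docker network itself, and the follow-up `docker network ls --format {{.Name}}` is how the test confirms one exists. Assuming the usual behaviour of naming the network after the profile, a narrower check could look like:

    $ docker network ls --format '{{.Name}}' | grep docker-network-516345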

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (30.35s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-379285 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-379285 --network=bridge: (28.26647465s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-379285" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-379285
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-379285: (2.058701057s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.35s)

                                                
                                    
TestKicExistingNetwork (31.88s)

=== RUN   TestKicExistingNetwork
I0110 09:33:22.564872  309898 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0110 09:33:22.581611  309898 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0110 09:33:22.582481  309898 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0110 09:33:22.582524  309898 cli_runner.go:164] Run: docker network inspect existing-network
W0110 09:33:22.598291  309898 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0110 09:33:22.598322  309898 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0110 09:33:22.598336  309898 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0110 09:33:22.598447  309898 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 09:33:22.617084  309898 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b03e24b92d87 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:da:2e:21:fd:ce:73} reservation:<nil>}
I0110 09:33:22.617431  309898 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b1b140}
I0110 09:33:22.617458  309898 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0110 09:33:22.617509  309898 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0110 09:33:22.699125  309898 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-161599 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-161599 --network=existing-network: (29.567463563s)
helpers_test.go:176: Cleaning up "existing-network-161599" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-161599
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-161599: (2.139555914s)
I0110 09:33:54.423251  309898 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.88s)
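The cli_runner lines above show the whole recipe this test exercises: pre-create a bridge network, then hand it to minikube with --network. Reconstructed from those lines (same subnet, gateway, MTU, and labels as the run above):

    $ docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
        existing-network
    $ out/minikube-linux-arm64 start -p existing-network-161599 --network=existing-network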

                                                
                                    
TestKicCustomSubnet (30.22s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-636540 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-636540 --subnet=192.168.60.0/24: (28.08621255s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-636540 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-636540" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-636540
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-636540: (2.114801607s)
--- PASS: TestKicCustomSubnet (30.22s)
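This test pairs --subnet on start with a `docker network inspect` of the resulting network; both commands are taken directly from the run above, and the inspect should print back the requested CIDR:

    $ out/minikube-linux-arm64 start -p custom-subnet-636540 --subnet=192.168.60.0/24
    $ docker network inspect custom-subnet-636540 --format '{{(index .IPAM.Config 0).Subnet}}'
    192.168.60.0/24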

                                                
                                    
TestKicStaticIP (30.48s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-852383 --static-ip=192.168.200.200
E0110 09:34:41.353464  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-852383 --static-ip=192.168.200.200: (28.086967896s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-852383 ip
helpers_test.go:176: Cleaning up "static-ip-852383" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-852383
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-852383: (2.214666085s)
--- PASS: TestKicStaticIP (30.48s)
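--static-ip pins the node address at creation time, and the test reads it back with `minikube ip`. Replaying the two commands from this run, the second should echo the requested address:

    $ out/minikube-linux-arm64 start -p static-ip-852383 --static-ip=192.168.200.200
    $ out/minikube-linux-arm64 -p static-ip-852383 ip
    192.168.200.200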

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (61.19s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-714042 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-714042 --driver=docker  --container-runtime=crio: (27.289509296s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-716971 --driver=docker  --container-runtime=crio
E0110 09:35:22.499298  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-716971 --driver=docker  --container-runtime=crio: (27.97769741s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-714042
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-716971
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-716971" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-716971
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-716971: (2.130466332s)
helpers_test.go:176: Cleaning up "first-714042" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-714042
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-714042: (2.389942492s)
--- PASS: TestMinikubeProfile (61.19s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.74s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-199130 --memory=3072 --mount-string /tmp/TestMountStartserial2022608672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-199130 --memory=3072 --mount-string /tmp/TestMountStartserial2022608672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.736694816s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.74s)
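The long start invocation maps a host directory into the guest at /minikube-host and sets the mount parameters (uid, gid, msize, port) explicitly; the Verify* steps below only list that path over SSH. Stripped to the relevant flags, with /path/on/host standing in for the per-run temp directory the test generates:

    $ out/minikube-linux-arm64 start -p mount-start-1-199130 --memory=3072 \
        --mount-string /path/on/host:/minikube-host \
        --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
        --no-kubernetes --driver=docker --container-runtime=crio
    $ out/minikube-linux-arm64 -p mount-start-1-199130 ssh -- ls /minikube-host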

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-199130 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.45s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-201313 --memory=3072 --mount-string /tmp/TestMountStartserial2022608672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-201313 --memory=3072 --mount-string /tmp/TestMountStartserial2022608672/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.44599554s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.45s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-201313 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-199130 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-199130 --alsologtostderr -v=5: (1.708827229s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-201313 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-201313
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-201313: (1.289457475s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.14s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-201313
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-201313: (7.139810989s)
--- PASS: TestMountStart/serial/RestartStopped (8.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-201313 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (71.48s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-885817 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0110 09:36:45.552860  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-885817 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m10.953056567s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (71.48s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.13s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-885817 -- rollout status deployment/busybox: (3.375629081s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec busybox-769dd8b7dd-ngdfr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec busybox-769dd8b7dd-s2bzj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec busybox-769dd8b7dd-ngdfr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec busybox-769dd8b7dd-s2bzj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec busybox-769dd8b7dd-ngdfr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec busybox-769dd8b7dd-s2bzj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.13s)
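The deployment check resolves in-cluster DNS from a busybox pod scheduled on each node. The pod names (busybox-769dd8b7dd-*) are generated per run, so substitute whatever the get-pods step returns; the lookups themselves are exactly the ones logged above:

    $ out/minikube-linux-arm64 kubectl -p multinode-885817 -- get pods -o jsonpath='{.items[*].metadata.name}'
    $ out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec <busybox-pod> -- nslookup kubernetes.io
    $ out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local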

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec busybox-769dd8b7dd-ngdfr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec busybox-769dd8b7dd-ngdfr -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec busybox-769dd8b7dd-s2bzj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec busybox-769dd8b7dd-s2bzj -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
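The host-reachability check is a small pipeline run inside each pod: nslookup host.minikube.internal, keep line 5 of the output and take its third space-separated field (the test's way of grabbing the resolved address), then ping that address once; in this run it resolved to the 192.168.67.1 gateway. As executed by the test, with <busybox-pod> as a placeholder for the generated pod name:

    $ out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec <busybox-pod> -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    $ out/minikube-linux-arm64 kubectl -p multinode-885817 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"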

                                                
                                    
TestMultiNode/serial/AddNode (28.69s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-885817 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-885817 -v=5 --alsologtostderr: (27.98286883s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.69s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-885817 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.32s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp testdata/cp-test.txt multinode-885817:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp multinode-885817:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile216382407/001/cp-test_multinode-885817.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp multinode-885817:/home/docker/cp-test.txt multinode-885817-m02:/home/docker/cp-test_multinode-885817_multinode-885817-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m02 "sudo cat /home/docker/cp-test_multinode-885817_multinode-885817-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp multinode-885817:/home/docker/cp-test.txt multinode-885817-m03:/home/docker/cp-test_multinode-885817_multinode-885817-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m03 "sudo cat /home/docker/cp-test_multinode-885817_multinode-885817-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp testdata/cp-test.txt multinode-885817-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp multinode-885817-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile216382407/001/cp-test_multinode-885817-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp multinode-885817-m02:/home/docker/cp-test.txt multinode-885817:/home/docker/cp-test_multinode-885817-m02_multinode-885817.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817 "sudo cat /home/docker/cp-test_multinode-885817-m02_multinode-885817.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp multinode-885817-m02:/home/docker/cp-test.txt multinode-885817-m03:/home/docker/cp-test_multinode-885817-m02_multinode-885817-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m03 "sudo cat /home/docker/cp-test_multinode-885817-m02_multinode-885817-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp testdata/cp-test.txt multinode-885817-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp multinode-885817-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile216382407/001/cp-test_multinode-885817-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp multinode-885817-m03:/home/docker/cp-test.txt multinode-885817:/home/docker/cp-test_multinode-885817-m03_multinode-885817.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817 "sudo cat /home/docker/cp-test_multinode-885817-m03_multinode-885817.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 cp multinode-885817-m03:/home/docker/cp-test.txt multinode-885817-m02:/home/docker/cp-test_multinode-885817-m03_multinode-885817-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m02 "sudo cat /home/docker/cp-test_multinode-885817-m03_multinode-885817-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.32s)
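Every cp/ssh pair above has the same shape: `minikube cp` copies a file to, from, or between nodes by prefixing paths with the node name, and `ssh -n <node>` reads it back to verify the contents. One round trip between the two secondary nodes, lifted verbatim from the steps above:

    $ out/minikube-linux-arm64 -p multinode-885817 cp multinode-885817-m02:/home/docker/cp-test.txt \
        multinode-885817-m03:/home/docker/cp-test_multinode-885817-m02_multinode-885817-m03.txt
    $ out/minikube-linux-arm64 -p multinode-885817 ssh -n multinode-885817-m03 \
        "sudo cat /home/docker/cp-test_multinode-885817-m02_multinode-885817-m03.txt"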

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-885817 node stop m03: (1.31074917s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-885817 status: exit status 7 (523.610463ms)

                                                
                                                
-- stdout --
	multinode-885817
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-885817-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-885817-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-885817 status --alsologtostderr: exit status 7 (528.386686ms)

                                                
                                                
-- stdout --
	multinode-885817
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-885817-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-885817-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:38:24.897132  414596 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:38:24.897309  414596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:38:24.897338  414596 out.go:374] Setting ErrFile to fd 2...
	I0110 09:38:24.897360  414596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:38:24.897648  414596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:38:24.897876  414596 out.go:368] Setting JSON to false
	I0110 09:38:24.897942  414596 mustload.go:66] Loading cluster: multinode-885817
	I0110 09:38:24.898019  414596 notify.go:221] Checking for updates...
	I0110 09:38:24.899038  414596 config.go:182] Loaded profile config "multinode-885817": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:38:24.899104  414596 status.go:174] checking status of multinode-885817 ...
	I0110 09:38:24.899696  414596 cli_runner.go:164] Run: docker container inspect multinode-885817 --format={{.State.Status}}
	I0110 09:38:24.918985  414596 status.go:371] multinode-885817 host status = "Running" (err=<nil>)
	I0110 09:38:24.919014  414596 host.go:66] Checking if "multinode-885817" exists ...
	I0110 09:38:24.919323  414596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-885817
	I0110 09:38:24.942536  414596 host.go:66] Checking if "multinode-885817" exists ...
	I0110 09:38:24.942846  414596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:38:24.942902  414596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-885817
	I0110 09:38:24.963054  414596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33274 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/multinode-885817/id_rsa Username:docker}
	I0110 09:38:25.066191  414596 ssh_runner.go:195] Run: systemctl --version
	I0110 09:38:25.073125  414596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:38:25.087321  414596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:38:25.145077  414596 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 09:38:25.134748798 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:38:25.145678  414596 kubeconfig.go:125] found "multinode-885817" server: "https://192.168.67.2:8443"
	I0110 09:38:25.145702  414596 api_server.go:166] Checking apiserver status ...
	I0110 09:38:25.145934  414596 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 09:38:25.157783  414596 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1249/cgroup
	I0110 09:38:25.166467  414596 api_server.go:192] apiserver freezer: "11:freezer:/docker/0e3c376cdd436e8ca49bf6a53f459b672449a40000b8f142b3e1d37652778f9d/crio/crio-dbf928942fff58aecc2f029de3983eb1c6116d81c142b17f4714c638dcd05ebb"
	I0110 09:38:25.166543  414596 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0e3c376cdd436e8ca49bf6a53f459b672449a40000b8f142b3e1d37652778f9d/crio/crio-dbf928942fff58aecc2f029de3983eb1c6116d81c142b17f4714c638dcd05ebb/freezer.state
	I0110 09:38:25.174186  414596 api_server.go:214] freezer state: "THAWED"
	I0110 09:38:25.174215  414596 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0110 09:38:25.182305  414596 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0110 09:38:25.182334  414596 status.go:463] multinode-885817 apiserver status = Running (err=<nil>)
	I0110 09:38:25.182345  414596 status.go:176] multinode-885817 status: &{Name:multinode-885817 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 09:38:25.182362  414596 status.go:174] checking status of multinode-885817-m02 ...
	I0110 09:38:25.182676  414596 cli_runner.go:164] Run: docker container inspect multinode-885817-m02 --format={{.State.Status}}
	I0110 09:38:25.201237  414596 status.go:371] multinode-885817-m02 host status = "Running" (err=<nil>)
	I0110 09:38:25.201265  414596 host.go:66] Checking if "multinode-885817-m02" exists ...
	I0110 09:38:25.201565  414596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-885817-m02
	I0110 09:38:25.220070  414596 host.go:66] Checking if "multinode-885817-m02" exists ...
	I0110 09:38:25.220392  414596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 09:38:25.220445  414596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-885817-m02
	I0110 09:38:25.237969  414596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/22427-308033/.minikube/machines/multinode-885817-m02/id_rsa Username:docker}
	I0110 09:38:25.341662  414596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 09:38:25.354208  414596 status.go:176] multinode-885817-m02 status: &{Name:multinode-885817-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0110 09:38:25.354242  414596 status.go:174] checking status of multinode-885817-m03 ...
	I0110 09:38:25.354572  414596 cli_runner.go:164] Run: docker container inspect multinode-885817-m03 --format={{.State.Status}}
	I0110 09:38:25.371473  414596 status.go:371] multinode-885817-m03 host status = "Stopped" (err=<nil>)
	I0110 09:38:25.371496  414596 status.go:384] host is not running, skipping remaining checks
	I0110 09:38:25.371509  414596 status.go:176] multinode-885817-m03 status: &{Name:multinode-885817-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.54s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-885817 node start m03 -v=5 --alsologtostderr: (7.733639485s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.54s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (73.07s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-885817
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-885817
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-885817: (24.999338564s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-885817 --wait=true -v=5 --alsologtostderr
E0110 09:39:41.347952  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-885817 --wait=true -v=5 --alsologtostderr: (47.936963513s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-885817
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.07s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.94s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-885817 node delete m03: (5.232823836s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.94s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.93s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-885817 stop: (23.727824734s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-885817 status: exit status 7 (106.688788ms)

                                                
                                                
-- stdout --
	multinode-885817
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-885817-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-885817 status --alsologtostderr: exit status 7 (91.885145ms)

                                                
                                                
-- stdout --
	multinode-885817
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-885817-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:40:16.814989  422461 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:40:16.815215  422461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:40:16.815229  422461 out.go:374] Setting ErrFile to fd 2...
	I0110 09:40:16.815235  422461 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:40:16.815543  422461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:40:16.815796  422461 out.go:368] Setting JSON to false
	I0110 09:40:16.815842  422461 mustload.go:66] Loading cluster: multinode-885817
	I0110 09:40:16.815939  422461 notify.go:221] Checking for updates...
	I0110 09:40:16.816291  422461 config.go:182] Loaded profile config "multinode-885817": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:40:16.816315  422461 status.go:174] checking status of multinode-885817 ...
	I0110 09:40:16.817191  422461 cli_runner.go:164] Run: docker container inspect multinode-885817 --format={{.State.Status}}
	I0110 09:40:16.835967  422461 status.go:371] multinode-885817 host status = "Stopped" (err=<nil>)
	I0110 09:40:16.835989  422461 status.go:384] host is not running, skipping remaining checks
	I0110 09:40:16.835996  422461 status.go:176] multinode-885817 status: &{Name:multinode-885817 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 09:40:16.836025  422461 status.go:174] checking status of multinode-885817-m02 ...
	I0110 09:40:16.836355  422461 cli_runner.go:164] Run: docker container inspect multinode-885817-m02 --format={{.State.Status}}
	I0110 09:40:16.857002  422461 status.go:371] multinode-885817-m02 host status = "Stopped" (err=<nil>)
	I0110 09:40:16.857028  422461 status.go:384] host is not running, skipping remaining checks
	I0110 09:40:16.857035  422461 status.go:176] multinode-885817-m02 status: &{Name:multinode-885817-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.93s)
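Note: both non-zero exits here are the expected result, not failures. After "minikube stop", "minikube status" reports overall state through its exit code, and 7 appears to combine the host-stopped, cluster-stopped and kubernetes-stopped flags. A minimal sketch of how a caller might treat that code:
	out/minikube-linux-arm64 -p multinode-885817 status
	rc=$?
	[ "$rc" -eq 7 ] && echo "profile is fully stopped (expected after a stop)"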

                                                
                                    
TestMultiNode/serial/RestartMultiNode (46.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-885817 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0110 09:40:22.499499  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-885817 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (45.57334986s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-885817 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.28s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (30.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-885817
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-885817-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-885817-m02 --driver=docker  --container-runtime=crio: exit status 14 (91.525042ms)

                                                
                                                
-- stdout --
	* [multinode-885817-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-885817-m02' is duplicated with machine name 'multinode-885817-m02' in profile 'multinode-885817'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-885817-m03 --driver=docker  --container-runtime=crio
E0110 09:41:04.397595  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-885817-m03 --driver=docker  --container-runtime=crio: (27.930934781s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-885817
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-885817: exit status 80 (350.418367ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-885817 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-885817-m03 already exists in multinode-885817-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-885817-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-885817-m03: (2.021431227s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.44s)
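Note: both rejected commands above are the guard rails this test is checking. A new profile may not reuse a machine name that already belongs to a node of an existing profile (exit 14, MK_USAGE), and "node add" refuses a node name already taken by another profile (exit 80, GUEST_NODE_ADD). To see which profile and node names are in use before picking a new one, the same listing the ProfileList test uses later works here too:
	out/minikube-linux-arm64 profile list --output=json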

                                                
                                    
TestScheduledStopUnix (102.69s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-472417 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-472417 --memory=3072 --driver=docker  --container-runtime=crio: (27.092019371s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472417 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 09:42:05.032157  430932 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:42:05.032301  430932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:42:05.032312  430932 out.go:374] Setting ErrFile to fd 2...
	I0110 09:42:05.032317  430932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:42:05.032737  430932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:42:05.033049  430932 out.go:368] Setting JSON to false
	I0110 09:42:05.033165  430932 mustload.go:66] Loading cluster: scheduled-stop-472417
	I0110 09:42:05.033518  430932 config.go:182] Loaded profile config "scheduled-stop-472417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:42:05.033611  430932 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/scheduled-stop-472417/config.json ...
	I0110 09:42:05.033791  430932 mustload.go:66] Loading cluster: scheduled-stop-472417
	I0110 09:42:05.033918  430932 config.go:182] Loaded profile config "scheduled-stop-472417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-472417 -n scheduled-stop-472417
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 09:42:05.473698  431021 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:42:05.473893  431021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:42:05.473921  431021 out.go:374] Setting ErrFile to fd 2...
	I0110 09:42:05.473940  431021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:42:05.474574  431021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:42:05.475151  431021 out.go:368] Setting JSON to false
	I0110 09:42:05.476419  431021 daemonize_unix.go:73] killing process 430955 as it is an old scheduled stop
	I0110 09:42:05.481188  431021 mustload.go:66] Loading cluster: scheduled-stop-472417
	I0110 09:42:05.481670  431021 config.go:182] Loaded profile config "scheduled-stop-472417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:42:05.481751  431021 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/scheduled-stop-472417/config.json ...
	I0110 09:42:05.481939  431021 mustload.go:66] Loading cluster: scheduled-stop-472417
	I0110 09:42:05.482049  431021 config.go:182] Loaded profile config "scheduled-stop-472417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0110 09:42:05.488771  309898 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/scheduled-stop-472417/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472417 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-472417 -n scheduled-stop-472417
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-472417
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-472417 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 09:42:31.408421  431501 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:42:31.408639  431501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:42:31.408670  431501 out.go:374] Setting ErrFile to fd 2...
	I0110 09:42:31.408690  431501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:42:31.409073  431501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:42:31.409407  431501 out.go:368] Setting JSON to false
	I0110 09:42:31.409539  431501 mustload.go:66] Loading cluster: scheduled-stop-472417
	I0110 09:42:31.410181  431501 config.go:182] Loaded profile config "scheduled-stop-472417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:42:31.410309  431501 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/scheduled-stop-472417/config.json ...
	I0110 09:42:31.410529  431501 mustload.go:66] Loading cluster: scheduled-stop-472417
	I0110 09:42:31.410707  431501 config.go:182] Loaded profile config "scheduled-stop-472417": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-472417
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-472417: exit status 7 (69.362501ms)

                                                
                                                
-- stdout --
	scheduled-stop-472417
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-472417 -n scheduled-stop-472417
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-472417 -n scheduled-stop-472417: exit status 7 (68.709051ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-472417" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-472417
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-472417: (4.026715661s)
--- PASS: TestScheduledStopUnix (102.69s)
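Note: this run exercises the scheduled-stop daemon end to end. "--schedule <duration>" forks a background process that stops the profile later, issuing a new schedule kills the previous one (the "killing process ... old scheduled stop" line), and "--cancel-scheduled" clears whatever is pending. Condensed, with <profile> as a placeholder:
	minikube stop -p <profile> --schedule 5m        # arm a stop five minutes out
	minikube stop -p <profile> --schedule 15s       # replaces the earlier schedule
	minikube stop -p <profile> --cancel-scheduled   # cancel any pending stop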

                                                
                                    
TestInsufficientStorage (10.02s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-104326 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-104326 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.464964022s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4c6a4942-0b5a-47be-821e-4dd89360001e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-104326] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"392872b6-950c-486f-b83d-8a1325b04d25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22427"}}
	{"specversion":"1.0","id":"70f67884-04f4-4ff6-ab70-ab17c067093d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ae89c9ac-8d21-449a-916d-484ff8bd7273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig"}}
	{"specversion":"1.0","id":"30364c50-dd55-404b-bd8e-a2c72bbf6702","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube"}}
	{"specversion":"1.0","id":"648db6c9-7ed9-495c-85bf-ca50b7a6d00b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b78c3fed-ef0f-4933-82f1-2707127baba3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"94407085-a9f2-4953-b525-0d959d380ce0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7d4ddb59-2c55-459e-a2e9-45735c503bd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c5b90b90-1f12-424b-a061-589f2b4266d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a08ec8ba-c36e-4c96-b75c-d8842b80384d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ef6b9f3d-c783-4c34-b030-6acec8585ff0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-104326\" primary control-plane node in \"insufficient-storage-104326\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8ac9f09-9318-4398-b435-4d5a92260d5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1767944074-22401 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"daad0509-3c78-487c-b4cb-e612f8da4cc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"241b95e3-b732-4003-930c-b03df80444ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-104326 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-104326 --output=json --layout=cluster: exit status 7 (318.634888ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-104326","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-104326","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 09:43:28.351410  433359 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-104326" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-104326 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-104326 --output=json --layout=cluster: exit status 7 (301.681029ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-104326","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-104326","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 09:43:28.655314  433425 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-104326" does not appear in /home/jenkins/minikube-integration/22427-308033/kubeconfig
	E0110 09:43:28.665296  433425 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/insufficient-storage-104326/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-104326" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-104326
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-104326: (1.937794894s)
--- PASS: TestInsufficientStorage (10.02s)
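Note: the exit-26 failure is simulated. MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 (both visible in the JSON events above) appear to make minikube treat /var as effectively full, which triggers the RSRC_DOCKER_STORAGE error and the 507/InsufficientStorage status codes the later status calls assert on. Reproducing it by hand would look roughly like:
	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-arm64 start -p insufficient-storage-104326 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio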

                                                
                                    
TestRunningBinaryUpgrade (311.94s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3475097523 start -p running-upgrade-516499 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3475097523 start -p running-upgrade-516499 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.096192086s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-516499 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0110 09:49:41.347928  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 09:50:22.499163  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-516499 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.353888624s)
helpers_test.go:176: Cleaning up "running-upgrade-516499" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-516499
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-516499: (2.354562628s)
--- PASS: TestRunningBinaryUpgrade (311.94s)

                                                
                                    
TestKubernetesUpgrade (107.8s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-749340 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0110 09:45:22.498726  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-749340 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.909341631s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-749340 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-749340 --alsologtostderr: (1.383202572s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-749340 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-749340 status --format={{.Host}}: exit status 7 (64.559816ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-749340 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-749340 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.736628186s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-749340 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-749340 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-749340 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (112.022187ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-749340] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-749340
	    minikube start -p kubernetes-upgrade-749340 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7493402 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-749340 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-749340 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-749340 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.645548554s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-749340" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-749340
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-749340: (2.785020925s)
--- PASS: TestKubernetesUpgrade (107.80s)
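Note: the sequence above is the full in-place upgrade path: install v1.28.0, stop, restart the same profile at v1.35.0, confirm a downgrade back to v1.28.0 is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED, with the recreate/second-cluster suggestions shown above), then restart once more at v1.35.0. Stripped of the memory and logging flags used in CI, the core commands are:
	minikube start -p kubernetes-upgrade-749340 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-749340
	minikube start -p kubernetes-upgrade-749340 --kubernetes-version=v1.35.0 --driver=docker --container-runtime=crio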

                                                
                                    
TestMissingContainerUpgrade (146.35s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2963623410 start -p missing-upgrade-191186 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2963623410 start -p missing-upgrade-191186 --memory=3072 --driver=docker  --container-runtime=crio: (1m2.073788862s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-191186
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-191186: (10.445508248s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-191186
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-191186 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-191186 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.49090326s)
helpers_test.go:176: Cleaning up "missing-upgrade-191186" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-191186
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-191186: (3.471673172s)
--- PASS: TestMissingContainerUpgrade (146.35s)

                                                
                                    
TestPause/serial/Start (51.98s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-667994 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-667994 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (51.976957145s)
--- PASS: TestPause/serial/Start (51.98s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (17.14s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-667994 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-667994 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.121849226s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (17.14s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.76s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (313.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2486481718 start -p stopped-upgrade-316307 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2486481718 start -p stopped-upgrade-316307 --memory=3072 --vm-driver=docker  --container-runtime=crio: (39.76697238s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2486481718 -p stopped-upgrade-316307 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2486481718 -p stopped-upgrade-316307 stop: (1.30428213s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-316307 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-316307 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m32.034043218s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (313.11s)
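Note: this scenario checks that the freshly built binary can adopt a cluster created and stopped by the previous release: the old binary under /tmp creates and stops the profile, then the binary under test restarts it. Condensed (verbosity flags omitted):
	/tmp/minikube-v1.35.0.2486481718 start -p stopped-upgrade-316307 --memory=3072 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.35.0.2486481718 -p stopped-upgrade-316307 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-316307 --memory=3072 --driver=docker --container-runtime=crio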

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-316307
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-316307: (1.866659162s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.87s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (70.68s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-425904 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-425904 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m3.919936079s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-425904 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-425904
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-425904: (5.992300079s)
--- PASS: TestPreload/Start-NoPreload-PullImage (70.68s)

                                                
                                    
TestPreload/Restart-With-Preload-Check-User-Image (46.24s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-425904 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-425904 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.005488458s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-425904 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (46.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-634947 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-634947 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (91.33695ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-634947] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (28.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-634947 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0110 09:53:25.553408  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-634947 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.66901464s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-634947 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-634947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-634947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.712514015s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-634947 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-634947 status -o json: exit status 2 (308.173738ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-634947","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-634947
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-634947: (1.997135836s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.02s)

                                                
                                    
TestNoKubernetes/serial/Start (7.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-634947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-634947 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.52334174s)
--- PASS: TestNoKubernetes/serial/Start (7.52s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22427-308033/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-634947 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-634947 "sudo systemctl is-active --quiet service kubelet": exit status 1 (478.180477ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.48s)
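Note: exit status 1 is the expected outcome here. With --no-kubernetes the kubelet unit is never started, so "systemctl is-active --quiet" inside the node exits non-zero (3 for an inactive unit, surfaced through ssh as "Process exited with status 3"), which the test counts as success. Checked by hand:
	out/minikube-linux-arm64 ssh -p NoKubernetes-634947 "sudo systemctl is-active --quiet service kubelet" \
	  || echo "kubelet is not active, as expected with --no-kubernetes"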

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-634947
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-634947: (1.29601664s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-634947 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-634947 --driver=docker  --container-runtime=crio: (6.960070228s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-634947 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-634947 "sudo systemctl is-active --quiet service kubelet": exit status 1 (291.915506ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestNetworkPlugins/group/false (3.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-255897 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-255897 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (197.092396ms)

                                                
                                                
-- stdout --
	* [false-255897] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 09:54:15.291432  483989 out.go:360] Setting OutFile to fd 1 ...
	I0110 09:54:15.291572  483989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:54:15.291582  483989 out.go:374] Setting ErrFile to fd 2...
	I0110 09:54:15.291589  483989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 09:54:15.291916  483989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-308033/.minikube/bin
	I0110 09:54:15.292381  483989 out.go:368] Setting JSON to false
	I0110 09:54:15.293410  483989 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9405,"bootTime":1768029451,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0110 09:54:15.293509  483989 start.go:143] virtualization:  
	I0110 09:54:15.297135  483989 out.go:179] * [false-255897] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 09:54:15.300833  483989 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 09:54:15.300915  483989 notify.go:221] Checking for updates...
	I0110 09:54:15.306638  483989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 09:54:15.309568  483989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-308033/kubeconfig
	I0110 09:54:15.312480  483989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-308033/.minikube
	I0110 09:54:15.315458  483989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 09:54:15.318407  483989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 09:54:15.321957  483989 config.go:182] Loaded profile config "force-systemd-env-646877": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 09:54:15.322069  483989 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 09:54:15.355308  483989 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 09:54:15.355468  483989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 09:54:15.419186  483989 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 09:54:15.409246657 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 09:54:15.419293  483989 docker.go:319] overlay module found
	I0110 09:54:15.422567  483989 out.go:179] * Using the docker driver based on user configuration
	I0110 09:54:15.425591  483989 start.go:309] selected driver: docker
	I0110 09:54:15.425615  483989 start.go:928] validating driver "docker" against <nil>
	I0110 09:54:15.425670  483989 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 09:54:15.429133  483989 out.go:203] 
	W0110 09:54:15.432131  483989 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0110 09:54:15.435105  483989 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-255897 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-255897

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-255897

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-255897

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-255897

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-255897

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-255897

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-255897

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-255897

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-255897

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-255897

>>> host: /etc/nsswitch.conf:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: /etc/hosts:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: /etc/resolv.conf:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-255897

>>> host: crictl pods:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: crictl containers:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> k8s: describe netcat deployment:
error: context "false-255897" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-255897" does not exist

>>> k8s: netcat logs:
error: context "false-255897" does not exist

>>> k8s: describe coredns deployment:
error: context "false-255897" does not exist

>>> k8s: describe coredns pods:
error: context "false-255897" does not exist

>>> k8s: coredns logs:
error: context "false-255897" does not exist

>>> k8s: describe api server pod(s):
error: context "false-255897" does not exist

>>> k8s: api server logs:
error: context "false-255897" does not exist

>>> host: /etc/cni:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: ip a s:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: ip r s:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: iptables-save:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: iptables table nat:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> k8s: describe kube-proxy daemon set:
error: context "false-255897" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-255897" does not exist

>>> k8s: kube-proxy logs:
error: context "false-255897" does not exist

>>> host: kubelet daemon status:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: kubelet daemon config:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> k8s: kubelet logs:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-255897

>>> host: docker daemon status:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: docker daemon config:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: /etc/docker/daemon.json:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: docker system info:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: cri-docker daemon status:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: cri-docker daemon config:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: cri-dockerd version:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: containerd daemon status:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: containerd daemon config:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: /etc/containerd/config.toml:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: containerd config dump:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: crio daemon status:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: crio daemon config:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: /etc/crio:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

>>> host: crio config:
* Profile "false-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-255897"

----------------------- debugLogs end: false-255897 [took: 3.21611347s] --------------------------------
helpers_test.go:176: Cleaning up "false-255897" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-255897
--- PASS: TestNetworkPlugins/group/false (3.57s)
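Note on the "false" variant: this group deliberately starts minikube with CNI disabled, so the MK_USAGE exit captured in the stderr block above ("The "crio" container runtime requires CNI") is the expected outcome, which is why the test still passes. A minimal sketch of the two invocations, assuming the same docker driver and crio runtime used throughout this report (profile names here are illustrative, not from the run):

$ minikube start -p cni-false-demo  --driver=docker --container-runtime=crio --cni=false   # rejected: crio has no CNI, exits with MK_USAGE
$ minikube start -p cni-bridge-demo --driver=docker --container-runtime=crio --cni=bridge  # accepted: any concrete CNI satisfies the check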

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (60.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m0.643557336s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.64s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-729486 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [9cfed1f7-4d02-4c7d-acf4-33d7165fff27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [9cfed1f7-4d02-4c7d-acf4-33d7165fff27] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003918513s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-729486 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)
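The DeployApp step applies testdata/busybox.yaml and waits for a pod labelled integration-test=busybox to become Ready before running "ulimit -n" inside it. The manifest itself is not reproduced in this report; the sketch below is a hypothetical stand-in for its shape, using the busybox image that later appears in this profile's image list:

$ kubectl --context old-k8s-version-729486 apply -f - <<'EOF'
# hypothetical stand-in for testdata/busybox.yaml; the real file may differ in command and fields
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF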

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-729486 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-729486 --alsologtostderr -v=3: (12.034846195s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-729486 -n old-k8s-version-729486
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-729486 -n old-k8s-version-729486: exit status 7 (86.964238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-729486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
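After the Stop step the status probe above exits with code 7 while printing "Stopped"; the test explicitly treats that as acceptable ("may be ok"), since a stopped host is exactly the state EnableAddonAfterStop wants to exercise before enabling the dashboard addon. The same behaviour can be reproduced by hand with the command from the log:

$ out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-729486
$ echo "exit code: $?"   # non-zero while the profile is stopped (7 in this run)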

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (55.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-729486 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.18175372s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-729486 -n old-k8s-version-729486
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-c5xh5" [131040d6-af8c-40cf-8970-f218be5ab7fc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00372328s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-c5xh5" [131040d6-af8c-40cf-8970-f218be5ab7fc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003410017s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-729486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-729486 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
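VerifyKubernetesImages lists everything in the node's container runtime via "image list --format=json" and reports images outside the expected minikube set, such as the kindnet and busybox images above. For a manual check the same subcommand can be pointed at the profile; the table format is an assumption about the available output formats (json is what the test itself uses):

$ out/minikube-linux-arm64 -p old-k8s-version-729486 image list --format=table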

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (55.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (55.024721217s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.02s)
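The no-preload profile starts with --preload=false, so the node pulls each image on demand instead of extracting the preloaded image tarball. A hedged way to inspect what ended up in the crio image store afterwards, using minikube's ssh passthrough:

$ out/minikube-linux-arm64 -p no-preload-964204 ssh -- sudo crictl images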

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-964204 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [13b4c695-6efc-4c91-a4a4-379b8ac827e5] Pending
helpers_test.go:353: "busybox" [13b4c695-6efc-4c91-a4a4-379b8ac827e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [13b4c695-6efc-4c91-a4a4-379b8ac827e5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004364753s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-964204 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-964204 --alsologtostderr -v=3
E0110 10:04:41.349712  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-964204 --alsologtostderr -v=3: (12.017887208s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-964204 -n no-preload-964204
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-964204 -n no-preload-964204: exit status 7 (73.827668ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-964204 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (49.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E0110 10:05:22.499143  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-964204 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (49.295234494s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-964204 -n no-preload-964204
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.66s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-6m4km" [a145a098-a2df-421c-9baa-e284b7b515ab] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004117185s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-6m4km" [a145a098-a2df-421c-9baa-e284b7b515ab] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003771621s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-964204 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-964204 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (47.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (47.252277812s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.25s)
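The embed-certs profile is started with --embed-certs, which writes certificate and key data directly into the kubeconfig entry instead of referencing files under the profile directory. One way to confirm that for this context is to look for inline *-data fields in the minified kubeconfig:

$ kubectl --context embed-certs-219333 config view --minify --raw | grep -E 'certificate-authority-data|client-certificate-data|client-key-data'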

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (43.610688198s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.61s)
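default-k8s-diff-port is started with --apiserver-port=8444, moving the API server off the usual 8443. With the docker driver the kubeconfig still points at a forwarded 127.0.0.1 port, so a more direct check is the --secure-port argument on the kube-apiserver static pod; the selector below assumes the standard kubeadm static-pod labels:

$ kubectl --context default-k8s-diff-port-820203 -n kube-system get pods -l component=kube-apiserver -o yaml | grep -- --secure-port   # expect --secure-port=8444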

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-219333 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a3f12a22-072b-44a0-84f9-98b212456e49] Pending
helpers_test.go:353: "busybox" [a3f12a22-072b-44a0-84f9-98b212456e49] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a3f12a22-072b-44a0-84f9-98b212456e49] Running
E0110 10:06:48.454729  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:06:48.459971  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:06:48.470591  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:06:48.490842  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:06:48.531113  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:06:48.611493  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:06:48.771840  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:06:49.092772  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:06:49.733308  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:06:51.013523  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00400945s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-219333 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-219333 --alsologtostderr -v=3
E0110 10:06:58.694609  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-219333 --alsologtostderr -v=3: (12.185982343s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-219333 -n embed-certs-219333
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-219333 -n embed-certs-219333: exit status 7 (82.262757ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-219333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (52.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E0110 10:07:08.935151  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-219333 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (52.359277898s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-219333 -n embed-certs-219333
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.81s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-820203 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [bfb7e017-ab95-4a49-b9a3-f277223dc9f8] Pending
helpers_test.go:353: "busybox" [bfb7e017-ab95-4a49-b9a3-f277223dc9f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [bfb7e017-ab95-4a49-b9a3-f277223dc9f8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003555952s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-820203 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.39s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-820203 --alsologtostderr -v=3
E0110 10:07:29.415396  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-820203 --alsologtostderr -v=3: (12.779890204s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.78s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203: exit status 7 (80.111369ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-820203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-820203 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.591567898s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-820203 -n default-k8s-diff-port-820203
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vqjzg" [1b21a08c-5883-4509-a228-443593f70bc2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003953684s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vqjzg" [1b21a08c-5883-4509-a228-443593f70bc2] Running
E0110 10:08:10.376347  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004033316s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-219333 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-219333 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (33.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (33.96370684s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (33.96s)
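newest-cni starts with --network-plugin=cni and overrides kubeadm's pod network CIDR to 10.42.0.0/16 via --extra-config. Assuming kubeadm propagates that subnet to node CIDR allocation as usual, the node's podCIDR should fall inside that range:

$ kubectl --context newest-cni-474984 get nodes -o jsonpath='{.items[*].spec.podCIDR}'   # expected to lie within 10.42.0.0/16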

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-sd2l8" [7a2cac55-20cf-4707-af7e-443c17ea8195] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004925936s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-sd2l8" [7a2cac55-20cf-4707-af7e-443c17ea8195] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003765626s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-820203 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-820203 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestPreload/PreloadSrc/gcs (5.36s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-469953 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-469953 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (5.108920244s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-469953" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-469953
--- PASS: TestPreload/PreloadSrc/gcs (5.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestPreload/PreloadSrc/github (4.86s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-586120 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-586120 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (4.617459495s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-586120" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-586120
--- PASS: TestPreload/PreloadSrc/github (4.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (4.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-474984 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-474984 --alsologtostderr -v=3: (4.31780274s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (4.32s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.61s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-877054 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-877054" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-877054
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.61s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474984 -n newest-cni-474984
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474984 -n newest-cni-474984: exit status 7 (88.079322ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-474984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-474984 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (16.398093513s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-474984 -n newest-cni-474984
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.97s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (51.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (51.858299413s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-474984 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (47.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0110 10:09:32.896975  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:09:41.348028  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/functional-499282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:09:43.137664  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (47.574972672s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.58s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-255897 "pgrep -a kubelet"
I0110 10:09:56.727462  309898 config.go:182] Loaded profile config "auto-255897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-255897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-nf5l2" [227dbee0-9f7c-4614-9206-678912885c0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-nf5l2" [227dbee0-9f7c-4614-9206-678912885c0a] Running
E0110 10:10:03.618592  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/no-preload-964204/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 10:10:05.554025  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003120549s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.38s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-255897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
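For reference, the four connectivity checks in this group (NetCatPod, DNS, Localhost, HairPin) reduce to the commands below. This is a sketch reconstructed from the log lines above for the auto-255897 cluster of this run, not additional test output:

	# deploy the netcat test workload (net_test.go:149)
	kubectl --context auto-255897 replace --force -f testdata/netcat-deployment.yaml
	# in-cluster DNS resolution (net_test.go:175)
	kubectl --context auto-255897 exec deployment/netcat -- nslookup kubernetes.default
	# localhost reachability from inside the pod (net_test.go:194)
	kubectl --context auto-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: the pod reaching its own service (net_test.go:264)
	kubectl --context auto-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"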

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-lv2nf" [14e4f7d8-15f7-4023-aa83-05ac3a886355] Running
E0110 10:10:22.498758  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.009242102s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-255897 "pgrep -a kubelet"
I0110 10:10:26.305701  309898 config.go:182] Loaded profile config "kindnet-255897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-255897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-bsxxs" [cf1cea49-8cf8-49b6-b6a5-a8d2f666ae5c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-bsxxs" [cf1cea49-8cf8-49b6-b6a5-a8d2f666ae5c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00821399s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.054988041s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-255897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (55.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.351971298s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.35s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-t6wtq" [77173578-e352-4df0-9270-6ac08b22271b] Running
E0110 10:11:48.454828  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/old-k8s-version-729486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00415008s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-255897 "pgrep -a kubelet"
I0110 10:11:54.109598  309898 config.go:182] Loaded profile config "calico-255897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-255897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mqp74" [0613b60d-cd8b-42a7-903c-f22b5edee962] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mqp74" [0613b60d-cd8b-42a7-903c-f22b5edee962] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003154339s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-255897 "pgrep -a kubelet"
I0110 10:11:59.259149  309898 config.go:182] Loaded profile config "custom-flannel-255897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-255897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mk7xf" [d0d2111f-b3d7-4057-81e4-438dc8cfcbeb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mk7xf" [d0d2111f-b3d7-4057-81e4-438dc8cfcbeb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.002894519s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-255897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-255897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (70.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m10.775433603s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.78s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (55.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0110 10:12:55.206380  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (55.36766031s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-4cf9h" [b5ecbf8b-51f9-4236-ab9f-6f3636ea75cf] Running
E0110 10:13:36.166605  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/default-k8s-diff-port-820203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004900246s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-255897 "pgrep -a kubelet"
I0110 10:13:40.062296  309898 config.go:182] Loaded profile config "flannel-255897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-255897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-gpsrb" [1f0f721b-14b9-4987-8ebe-af1536465eab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-gpsrb" [1f0f721b-14b9-4987-8ebe-af1536465eab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.002805514s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-255897 "pgrep -a kubelet"
I0110 10:13:42.526348  309898 config.go:182] Loaded profile config "enable-default-cni-255897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-255897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zkrk8" [2b0ec774-f909-40ad-8047-ee454b2283b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-zkrk8" [2b0ec774-f909-40ad-8047-ee454b2283b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004306163s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-255897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-255897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (65.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-255897 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m5.259300247s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-255897 "pgrep -a kubelet"
E0110 10:15:22.499266  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/addons-502860/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0110 10:15:22.761128  309898 config.go:182] Loaded profile config "bridge-255897": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-255897 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-8stmr" [8d2d669f-1ad4-49a5-b6e9-ff39ef57924d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0110 10:15:24.999639  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-8stmr" [8d2d669f-1ad4-49a5-b6e9-ff39ef57924d] Running
E0110 10:15:30.120640  309898 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-308033/.minikube/profiles/kindnet-255897/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003591322s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-255897 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-255897 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (31/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.42s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-672733 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-672733" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-672733
--- SKIP: TestDownloadOnlyKic (0.42s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-757819" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-757819
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-255897 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-255897

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-255897

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-255897

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-255897

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-255897

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-255897

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-255897

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-255897

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-255897

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-255897

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: /etc/hosts:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: /etc/resolv.conf:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-255897

>>> host: crictl pods:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: crictl containers:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> k8s: describe netcat deployment:
error: context "kubenet-255897" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-255897" does not exist

>>> k8s: netcat logs:
error: context "kubenet-255897" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-255897" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-255897" does not exist

>>> k8s: coredns logs:
error: context "kubenet-255897" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-255897" does not exist

>>> k8s: api server logs:
error: context "kubenet-255897" does not exist

>>> host: /etc/cni:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: ip a s:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: ip r s:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: iptables-save:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: iptables table nat:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-255897" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-255897" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-255897" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: kubelet daemon config:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> k8s: kubelet logs:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-255897

>>> host: docker daemon status:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: docker daemon config:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: docker system info:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: cri-docker daemon status:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: cri-docker daemon config:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: cri-dockerd version:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: containerd daemon status:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: containerd daemon config:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: containerd config dump:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: crio daemon status:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: crio daemon config:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: /etc/crio:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

>>> host: crio config:
* Profile "kubenet-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-255897"

----------------------- debugLogs end: kubenet-255897 [took: 3.346098803s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-255897" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-255897
--- SKIP: TestNetworkPlugins/group/kubenet (3.50s)
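
Note: every probe in the debugLogs block above fails with either kubectl's "context was not found" / "does not exist" error or minikube's "Profile ... not found" hint. This is consistent with the test being skipped before `minikube start -p kubenet-255897` ever runs, so the debug collector is querying a profile and kubeconfig context that were never created; the cilium section below shows the same pattern.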

x
+
TestNetworkPlugins/group/cilium (3.81s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-255897 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-255897

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-255897

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-255897

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-255897

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-255897

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-255897

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-255897

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-255897

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-255897

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-255897

>>> host: /etc/nsswitch.conf:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: /etc/hosts:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: /etc/resolv.conf:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-255897

>>> host: crictl pods:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: crictl containers:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> k8s: describe netcat deployment:
error: context "cilium-255897" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-255897" does not exist

>>> k8s: netcat logs:
error: context "cilium-255897" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-255897" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-255897" does not exist

>>> k8s: coredns logs:
error: context "cilium-255897" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-255897" does not exist

>>> k8s: api server logs:
error: context "cilium-255897" does not exist

>>> host: /etc/cni:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: ip a s:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: ip r s:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: iptables-save:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: iptables table nat:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-255897

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-255897

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-255897" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-255897" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-255897

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-255897

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-255897" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-255897" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-255897" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-255897" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-255897" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: kubelet daemon config:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> k8s: kubelet logs:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-255897

>>> host: docker daemon status:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: docker daemon config:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: docker system info:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: cri-docker daemon status:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: cri-docker daemon config:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: cri-dockerd version:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: containerd daemon status:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: containerd daemon config:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: containerd config dump:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: crio daemon status:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: crio daemon config:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: /etc/crio:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

>>> host: crio config:
* Profile "cilium-255897" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-255897"

----------------------- debugLogs end: cilium-255897 [took: 3.651445473s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-255897" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-255897
--- SKIP: TestNetworkPlugins/group/cilium (3.81s)
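
For context, here is a minimal sketch of how a debug collector could skip its probes when the target kubeconfig context is absent, which is the condition every probe above hit. This is an illustrative example only: the guard, its names, and the profile list are hypothetical and are not the actual helpers_test.go code; it assumes kubectl is on PATH.

package main

import (
	"fmt"
	"os/exec"
)

// contextExists reports whether the named kubectl context is present in the
// active kubeconfig. A non-zero exit status from kubectl is treated as
// "not found", matching the "context was not found" errors in the log above.
func contextExists(name string) bool {
	return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
}

func main() {
	// Hypothetical profiles matching the skipped tests in this report.
	for _, profile := range []string{"kubenet-255897", "cilium-255897"} {
		if !contextExists(profile) {
			fmt.Printf("skipping debug probes: context %q does not exist\n", profile)
			continue
		}
		// ...run the kubectl/dig/crictl probes against the live cluster...
	}
}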
